Getting To Know Your Data

2.1 Exercises
1. Give three additional commonly used statistical measures (i.e., not illustrated in this chapter) for the characterization of data dispersion, and discuss how they can be computed efficiently in large databases.
Answer:
Data dispersion, also known as variance analysis, is the degree to which numeric data tend to spread and can be characterized by such statistical measures as mean deviation, measures of skewness, and the coefficient of variation.
The mean deviation is defined as the arithmetic mean of the absolute deviations from the mean and is calculated as:

\text{mean deviation} = \frac{1}{n} \sum_{i=1}^{n} |x_i - \bar{x}|,    (2.1)

where \bar{x} is the arithmetic mean of the values and n is the total number of values. This value will be greater for distributions with a larger spread.
A common measure of skewness is:

\frac{\bar{x} - \text{mode}}{s},    (2.2)

which indicates how far (in standard deviations, s) the mean \bar{x} is from the mode.
The coefficient of variation is the standard deviation expressed as a percentage of the arithmetic mean:

\text{coefficient of variation} = \frac{s}{\bar{x}} \times 100.    (2.3)

The variability in groups of observations with widely differing means can be compared using this measure.
Note that all of the input values used to calculate these three statistical measures are algebraic measures (Chapter 4). Thus, the value for the entire database can be efficiently calculated by partitioning the database, computing the values for each of the separate partitions, and then merging these values into an algebraic equation that can be used to calculate the value for the entire database.
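As a minimal sketch of that partition-and-merge idea (not from the text; in-memory lists stand in for database partitions, and the function name and data are illustrative), the mean deviation can be computed from per-partition aggregates in two passes:

```python
# Sketch (not from the text): computing the mean deviation of data that is
# split across partitions, using only per-partition aggregates.
def mean_deviation(partitions):
    # Pass 1: each partition contributes an algebraic pair (sum, count).
    total = sum(sum(p) for p in partitions)
    n = sum(len(p) for p in partitions)
    mean = total / n
    # Pass 2: each partition contributes its sum of absolute deviations.
    abs_dev = sum(sum(abs(x - mean) for x in p) for p in partitions)
    return abs_dev / n

partitions = [[13, 15, 16, 19], [20, 21, 22], [25, 30, 33]]  # made-up values
print(mean_deviation(partitions))
```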
2. Suppose that the data for analysis includes the attribute age. The age values for the data tuples are (in increasing order): 13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25, 25, 25, 30, 33, 33, 35, 35, 35, 35, 36, 40, 45, 46, 52, 70.
Answer:
(a) What is the mean of the data? What is the median?
The (arithmetic) mean of the data is \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i = 809/27 \approx 30. The median (the middle value of the ordered set, since the number of values in the set is odd) of the data is 25.
(b) What is the mode of the data? Comment on the data's modality (i.e., bimodal, trimodal, etc.).
The modes (values occurring with the greatest frequency) of the data are 25 and 35. Because two values occur with the same highest frequency, the data set is bimodal.
(c) What is the midrange of the data?
The midrange (average of the largest and smallest values in the data set) of the data is (70 + 13)/2 = 41.5.
(d) Can you find (roughly) the first quartile (Q1) and the third quartile (Q3) of the data?
The first quartile (corresponding to the 25th percentile) of the data is 20. The third quartile (corresponding to the 75th percentile) of the data is 35.
(e) Give the five-number summary of the data.
The five-number summary of a distribution consists of the minimum value, first quartile, median, third quartile, and maximum value. It provides a good summary of the shape of the distribution and, for this data, is: 13, 20, 25, 35, 70.
(f) Show a boxplot of the data.
See Figure 2.1.
(g) How is a quantile-quantile plot different from a quantile plot?
A quantile plot is a graphical method used to show the approximate percentage of values below or equal to the independent variable in a univariate distribution. Thus, it displays quantile information for all the data, where the values measured for the independent variable are plotted against their corresponding quantile.
A quantile-quantile plot, however, graphs the quantiles of one univariate distribution against the corresponding quantiles of another univariate distribution. Both axes display the range of values measured for their corresponding distribution, and points are plotted that correspond to the quantile values of the two distributions. A line (y = x) can be added to the graph, along with points marking where the first, second, and third quartiles lie, to increase the graph's informational value. Points that lie above such a line indicate a correspondingly higher value for the distribution plotted on the y-axis than for the distribution plotted on the x-axis at the same quantile. The opposite is true for points lying below this line.
Figure 2.1: A boxplot of the age data in Exercise 2.2 (y-axis: Values).
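As a cross-check (an addition, not part of the original solution), the statistics in parts (a)–(e) can be reproduced with NumPy; note that NumPy's interpolated percentile gives Q1 = 20.5, consistent with the "rough" value of 20:

```python
import numpy as np

# Sketch: reproducing the summary statistics for the age data of Exercise 2.2.
age = np.array([13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25, 25, 25,
                30, 33, 33, 35, 35, 35, 35, 36, 40, 45, 46, 52, 70])

print("mean    :", round(float(age.mean()), 2))        # ~29.96, i.e., about 30
print("median  :", np.median(age))                     # 25.0
print("midrange:", (int(age.max()) + int(age.min())) / 2)  # 41.5
print("Q1, Q3  :", np.percentile(age, [25, 75]))       # roughly 20 and 35
print("5-number:", [age.min(), *np.percentile(age, [25, 50, 75]), age.max()])
```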
3. Suppose that the values for a given set of data are grouped into intervals. The intervals and corresponding frequencies are as follows.
age       frequency
1–5       200
6–15      450
16–20     300
21–50     1500
51–80     700
81–110    44
Compute an approximate median value for the data.
Answer:
The median interval is 21–50, since the cumulative frequency first exceeds N/2 there. Using Equation (2.11) with L_1 = 21, N = 200 + 450 + 300 + 1500 + 700 + 44 = 3194, (\sum freq)_l = 200 + 450 + 300 = 950, freq_{median} = 1500, and width = 30:

\text{median} = 21 + \left( \frac{3194/2 - 950}{1500} \right) \times 30 \approx 33.94 \text{ years}.
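A minimal sketch (an addition; the interval boundaries and the width of 30 are the assumptions stated above) that evaluates the interpolation:

```python
# Sketch: approximate median of grouped data via linear interpolation
# (the formula given as Equation (2.11) later in this chapter).
intervals = [(1, 5, 200), (6, 15, 450), (16, 20, 300),
             (21, 50, 1500), (51, 80, 700), (81, 110, 44)]

n = sum(freq for _, _, freq in intervals)
cum = 0
for low, high, freq in intervals:
    if cum + freq >= n / 2:          # the median falls in this interval
        width = high - low + 1       # assumed width (e.g., 21..50 -> 30)
        median = low + (n / 2 - cum) / freq * width
        break
    cum += freq

print(round(median, 2))  # ~33.94 with these assumptions
```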
4. Suppose a hospital tested the age and body fat data for 18 randomly selected adults with the following results:
age    %fat     age    %fat
23     9.5      52     34.6
23     26.5     54     42.5
27     7.8      54     28.8
27     17.8     56     33.4
39     31.4     57     30.2
41     25.9     58     34.1
47     27.4     58     32.9
49     27.2     60     41.2
50     31.2     61     35.7
(a) Calculate the mean, median and standard deviation of age and %fat.
(b) Draw the boxplots for age and %fat.
(c) Draw a scatter plot and a q-q plot based on these two variables.
Answer:
(a) Calculate the mean, median and standard deviation of age and %fat.
For the variable age the mean is 46.44, the median is 51, and the standard deviation is 12.85. For
the variable %fat the mean is 28.78, the median is 30.7, and the standard deviation is 8.99.
(b) Draw the boxplots for age and %fat.
See Figure 2.2.
Figure 2.2: A boxplot of the variables age and %fat in Exercise 2.4.
(c) Draw a scatter plot and a q-q plot based on these two variables.
See Figure 2.3.
Figure 2.3: A q-q plot and a scatter plot of the variables age and %fat in Exercise 2.4.
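Parts (a)–(c) can be reproduced with NumPy and Matplotlib; this is a sketch added for verification, with plot layout and styling being choices of the sketch rather than of the original figures:

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch: reproducing Exercise 2.4 (a)-(c) for the age and %fat data.
age = np.array([23, 23, 27, 27, 39, 41, 47, 49, 50,
                52, 54, 54, 56, 57, 58, 58, 60, 61])
fat = np.array([9.5, 26.5, 7.8, 17.8, 31.4, 25.9, 27.4, 27.2, 31.2,
                34.6, 42.5, 28.8, 33.4, 30.2, 34.1, 32.9, 41.2, 35.7])

# (a) Mean, median, and (population) standard deviation.
for name, v in (("age", age), ("%fat", fat)):
    # expected: age 46.44, 51.0, 12.85 ; %fat 28.78, 30.7, 8.99
    print(name, round(float(v.mean()), 2), np.median(v),
          round(float(v.std()), 2))

# (b) Boxplots; (c) scatter plot and q-q plot. With equal sample sizes,
# the q-q plot pairs the sorted values of the two variables.
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 4))
ax1.boxplot([age, fat])
ax1.set_xticks([1, 2], ["age", "%fat"])
ax2.scatter(age, fat)
ax2.set_xlabel("age"); ax2.set_ylabel("fat"); ax2.set_title("scatter plot")
ax3.plot(np.sort(age), np.sort(fat), "o")
ax3.set_xlabel("age"); ax3.set_ylabel("fat"); ax3.set_title("q-q plot")
plt.show()
```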
5. Briefly outline how to compute the dissimilarity between objects described by the following:
(a) Nominal attributes
(b) Asymmetric binary attributes
(c) Numeric attributes
(d) Term-frequency vectors
Answer:
(a) Nominal attributes
A categorical variable is a generalization of the binary variable in that it can take on more than
two states.
The dissimilarity between two objects i and j can be computed based on the ratio of mismatches:
d(i, j) = \frac{p - m}{p},    (2.4)
where m is the number of matches (i.e., the number of variables for which i and j are in the same
state), and p is the total number of variables.
Alternatively, we can use a large number of binary variables by creating a new binary variable
for each of the M nominal states. For an object with a given state value, the binary variable
representing that state is set to 1, while the remaining binary variables are set to 0.
(b) Asymmetric binary attributes
If all binary variables have the same weight, we have the contingency table shown in Table 2.1.

Table 2.1: A contingency table for binary variables.

                      object j
                      1        0        sum
object i     1        q        r        q+r
             0        s        t        s+t
             sum      q+s      r+t      p

In computing the dissimilarity between asymmetric binary variables, the number of negative matches, t, is considered unimportant and is thus ignored in the computation, that is,

d(i, j) = \frac{r + s}{q + r + s}.    (2.5)
(c) Numeric attributes
Use a distance measure such as the Euclidean, Manhattan, or supremum distance. The Euclidean distance is defined as

d(i, j) = \sqrt{(x_{i1} - x_{j1})^2 + (x_{i2} - x_{j2})^2 + \cdots + (x_{ip} - x_{jp})^2},    (2.6)

the Manhattan (or city block) distance as

d(i, j) = |x_{i1} - x_{j1}| + |x_{i2} - x_{j2}| + \cdots + |x_{ip} - x_{jp}|,    (2.7)

and the supremum distance as

d(i, j) = \lim_{h \to \infty} \left( \sum_{f=1}^{p} |x_{if} - x_{jf}|^h \right)^{1/h} = \max_{f} |x_{if} - x_{jf}|.    (2.8)
(d) Term-frequency vectors
Use the cosine similarity measure, defined as

s(x, y) = \frac{x^t y}{||x|| \, ||y||},    (2.9)

where x^t is the transpose of vector x, ||x|| is the Euclidean norm of vector x (i.e., \sqrt{x_1^2 + x_2^2 + \cdots + x_p^2}), ||y|| is the Euclidean norm of vector y, and s is essentially the cosine of the angle between vectors x and y.
6. Given two objects represented by the tuples (22, 1, 42, 10) and (20, 0, 36, 8):
(a) Compute the Euclidean distance between the two objects.
(b) Compute the Manhattan distance between the two objects.
(c) Compute the Minkowski distance between the two objects, using h = 3.
(d) Compute the supremum distance between the two objects.
Answer:
(a) Compute the Euclidean distance between the two objects.
The Euclidean distance is computed using Equation (2.6).
Therefore, we have \sqrt{(22 - 20)^2 + (1 - 0)^2 + (42 - 36)^2 + (10 - 8)^2} = \sqrt{45} = 6.7082.
(b) Compute the Manhattan distance between the two objects.
The Manhattan distance is computed using Equation (2.7). Therefore, we have |22 - 20| + |1 - 0| + |42 - 36| + |10 - 8| = 11.
(c) Compute the Minkowski distance between the two objects, using h = 3.
The Minkowski distance is

d(i, j) = \left( |x_{i1} - x_{j1}|^h + |x_{i2} - x_{j2}|^h + \cdots + |x_{ip} - x_{jp}|^h \right)^{1/h}.    (2.10)

Therefore, with h = 3, we have \sqrt[3]{|22 - 20|^3 + |1 - 0|^3 + |42 - 36|^3 + |10 - 8|^3} = \sqrt[3]{233} = 6.1534.
(d) Compute the supremum distance between the two objects.
The supremum distance is computed using Equation (2.8). Therefore, we have a supremum
distance of 6.
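These four results can be double-checked numerically; a few standalone lines (an addition, not part of the original answer):

```python
# Sketch: verifying Exercise 2.6 for the two tuples.
a = (22, 1, 42, 10)
b = (20, 0, 36, 8)

diffs = [abs(x - y) for x, y in zip(a, b)]
print(sum(d ** 2 for d in diffs) ** 0.5)        # Euclidean: ~6.7082
print(sum(diffs))                               # Manhattan: 11
print(sum(d ** 3 for d in diffs) ** (1 / 3))    # Minkowski h=3: ~6.1534
print(max(diffs))                               # supremum: 6
```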
7. The median is one of the most important holistic measures in data analysis. Propose several methods for median approximation. Analyze their respective complexity under different parameter settings and decide to what extent the real value can be approximated. Moreover, suggest a heuristic strategy to balance between accuracy and complexity and then apply it to all methods you have given.
Answer:
This question can be dealt with either theoretically or empirically, but doing some experiments to get the result is perhaps more interesting.
We can give students some data sets sampled from different distributions, e.g., uniform and Gaussian (both symmetric) or exponential and gamma (both skewed). For example, if we use Equation (2.11) to do the approximation, as proposed in the chapter, the most straightforward way is to divide all data into k equal-length intervals:
\text{median} = L_1 + \left( \frac{N/2 - (\sum freq)_l}{freq_{median}} \right) \text{width},    (2.11)
where L_1 is the lower boundary of the median interval, N is the number of values in the entire data set, (\sum freq)_l is the sum of the frequencies of all of the intervals that are lower than the median interval, freq_{median} is the frequency of the median interval, and width is the width of the median interval.
Obviously, the error incurred decreases as k becomes larger; however, the time used in the whole procedure also increases. Let's analyze this relationship more formally. The product of the error made and the time used seems to be a good optimality measure. From this point, we can run many tests for each type of distribution (so that the result won't be dominated by randomness) and find the k giving the best trade-off. In practice, this parameter value can be chosen to improve system performance.
There are also other approaches to approximate the median, which students can propose, analyzing the best trade-off point and comparing the results among the different approaches. One possible approach is as follows (see the sketch below): hierarchically divide the whole data set into intervals. First, divide it into k regions and find the region in which the median resides. Second, divide this particular region into k sub-regions and find the sub-region in which the median resides. Iterate this until the width of the sub-region reaches a predefined threshold, and then apply the median approximation formula stated above. By doing this, we can confine the median to a smaller area without globally partitioning all data into shorter intervals, which is expensive (the cost is proportional to the number of intervals).
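A rough sketch of that hierarchical narrowing idea (the data, k, and the threshold are illustrative assumptions):

```python
import random

# Sketch: hierarchically narrow the interval containing the median,
# then interpolate as in Equation (2.11). k and threshold are choices.
def approx_median(data, k=10, threshold=1e-3):
    lo, hi = min(data), max(data)
    half = len(data) / 2
    below = 0                      # count of values strictly below `lo`
    while hi - lo > threshold:
        width = (hi - lo) / k
        counts = [0] * k           # one pass: histogram of current region
        for x in data:
            if lo <= x < hi:
                counts[min(int((x - lo) / width), k - 1)] += 1
        cum = below
        for i, c in enumerate(counts):
            if cum + c >= half:    # the median lies in bucket i
                lo, hi = lo + i * width, lo + (i + 1) * width
                below = cum
                break
            cum += c
    freq = sum(lo <= x < hi for x in data) or 1
    return lo + (half - below) / freq * (hi - lo)

data = [random.gauss(50, 10) for _ in range(10001)]
print(approx_median(data), sorted(data)[len(data) // 2])  # should be close
```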
8. It is important to define or select similarity measures in data analysis. However, there is no commonly accepted subjective similarity measure. Results can vary depending on the similarity measures used. Nonetheless, seemingly different similarity measures may be equivalent after some transformation.
Suppose we have the following two-dimensional data set:
        A1      A2
x1      1.5     1.7
x2      2.0     1.9
x3      1.6     1.8
x4      1.2     1.5
x5      1.5     1.0
(a) Consider the data as two-dimensional data points. Given a new data point, x = (1.4, 1.6) as a
query, rank the database points based on similarity with the query using Euclidean distance,
Manhattan distance, supremum distance, and cosine similarity.
(b) Normalize the data set to make the norm of each data point equal to 1. Use Euclidean distance
on the transformed data to rank the data points.
Answer:
(a) Use Equation (2.6) to compute the Euclidean distance, Equation (2.7) to compute the Manhattan distance, Equation (2.8) to compute the supremum distance, and Equation (2.9) to compute the cosine similarity between the input data point and each of the data points in the data set. Doing so yields the following table:

        Euclidean dist.   Manhattan dist.   supremum dist.   cosine sim.
x1      0.1414            0.2               0.1              0.99999
x2      0.6708            0.9               0.6              0.99575
x3      0.2828            0.4               0.2              0.99997
x4      0.2236            0.3               0.2              0.99903
x5      0.6083            0.7               0.6              0.96536

Based on the Euclidean, Manhattan, and supremum distances, the ranking (most to least similar) is x1, x4, x3, x5, x2 (with x3/x4 and x2/x5 tied under the supremum distance). Based on cosine similarity, the ranking is x1, x3, x4, x2, x5.
(b) After normalizing each data point to unit norm, the normalized data and the Euclidean distance of each point from the normalized query point x = (0.65850, 0.75258) are:

        A1         A2         Euclidean dist.
x1      0.66162    0.74984    0.00415
x2      0.72500    0.68875    0.09217
x3      0.66436    0.74741    0.00781
x4      0.62470    0.78087    0.04409
x5      0.83205    0.55470    0.26320

The resulting ranking is x1, x3, x4, x2, x5, which matches the ranking obtained with cosine similarity on the original data: for unit vectors, ||x - y||^2 = 2(1 - \cos(x, y)), so Euclidean distance on the normalized data is a monotone transformation of cosine similarity.
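The tables and rankings above can be reproduced with the following sketch (an addition for verification; rounding choices are the sketch's own):

```python
import math

# Sketch: reproducing the rankings of Exercise 2.8.
points = {"x1": (1.5, 1.7), "x2": (2.0, 1.9), "x3": (1.6, 1.8),
          "x4": (1.2, 1.5), "x5": (1.5, 1.0)}
q = (1.4, 1.6)

def euclid(a, b): return math.dist(a, b)
def manhattan(a, b): return sum(abs(x - y) for x, y in zip(a, b))
def supremum(a, b): return max(abs(x - y) for x, y in zip(a, b))
def cosine(a, b):
    return sum(x * y for x, y in zip(a, b)) / (math.hypot(*a) * math.hypot(*b))

for name, fn, reverse in [("Euclidean", euclid, False),
                          ("Manhattan", manhattan, False),
                          ("supremum", supremum, False),
                          ("cosine", cosine, True)]:  # similarity: descending
    ranked = sorted(points, key=lambda p: fn(q, points[p]), reverse=reverse)
    print(name, ranked)

# (b) Normalize to unit norm, then rank by Euclidean distance.
unit = {p: tuple(c / math.hypot(*v) for c in v) for p, v in points.items()}
qn = tuple(c / math.hypot(*q) for c in q)
print("normalized", sorted(unit, key=lambda p: euclid(qn, unit[p])))
```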
2.2 Supplementary Exercises
1. Briefly outline how to compute the dissimilarity between objects described by ratio-scaled variables.
Answer:
Three methods include:
- Treat ratio-scaled variables as interval-scaled variables, so that the Minkowski, Manhattan, or Euclidean distance can be used to compute the dissimilarity.
- Apply a logarithmic transformation to a ratio-scaled variable f having value x_{if} for object i by using the formula y_{if} = log(x_{if}). The y_{if} values can be treated as interval-valued (see the sketch below).
- Treat x_{if} as continuous ordinal data, and treat their ranks as interval-scaled variables.
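A small illustrative sketch (not from the text) of the second method, log-transforming a ratio-scaled attribute before applying an interval-scaled distance; the example values are made up:

```python
import math

# Sketch: dissimilarity for a ratio-scaled attribute via log transform.
# Example values are invented (e.g., a growth measurement per object).
x = {"i": 2.0, "j": 200.0, "k": 2000.0}

y = {obj: math.log10(v) for obj, v in x.items()}   # y_if = log(x_if)

# Treat the transformed values as interval-scaled: Manhattan distance.
print(abs(y["i"] - y["j"]))   # 2.0 (two orders of magnitude apart)
print(abs(y["j"] - y["k"]))   # 1.0 (one order of magnitude apart)
```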