Event Detection in Twitter
Jianshu Weng, Yuxia Yao, Erwin Leonardi, Francis Lee
HP Laboratories
HPL-2011-98
Keyword(s):
twitter; event detection; wavelet
Abstract:
Twitter, as a form of social media, has emerged rapidly in recent years. Users use Twitter to report
real-life events. This paper focuses on detecting those events by analyzing the text stream in Twitter.
Although event detection has long been a research topic, the characteristics of Twitter make it a non-trivial
task. Tweets reporting such events are usually overwhelmed by a flood of meaningless "babble".
Moreover, an event detection algorithm needs to be scalable given the sheer volume of tweets. This paper
tackles these challenges with EDCoW (Event Detection with Clustering of Wavelet-based
Signals). EDCoW builds signals for individual words by applying wavelet analysis to the frequency-based
raw signals of the words. It then filters away trivial words by looking at their corresponding signal
auto-correlations. The remaining words are then clustered to form events with a modularity-based graph
partitioning technique. Experimental studies show promising results for EDCoW. We also present the design
of a proof-of-concept system, which was used to analyze netizens' online discussion about the Singapore
General Election 2011.
External Posting Date: July 06, 2011 [Fulltext] Approved for External Publication
Internal Posting Date: July 06, 2011 [Fulltext]
1 Introduction
Microblogging, as a form of social media, has emerged rapidly in recent years. One of
the most representative examples is Twitter, which allows users to publish short tweets
(messages within a 140-character limit) about "what's happening". Real-life events are
reported in Twitter. For example, the Iranian election protests in 2009 were extensively
reported by Twitter users. Such reports can provide perspectives on news items different
from those of traditional media, as well as valuable user sentiment about certain
companies/products.
This paper focuses on detecting those events to gain a better understanding of what
users are really discussing in Twitter. Event detection has long been a research
topic [23]. The underlying assumption is that some related words show an in-
crease in usage when an event is happening. An event is therefore conventionally
represented by a number of keywords showing a burst in appearance count [23, 11]. For
example, "iran" would be used more often when users discuss the Iranian
∗ This report is an extension of a paper with the same title accepted by ICWSM ’11.
election protests. This paper also adopts this representation of events. Nevertheless,
the characteristics of Twitter pose new challenges:
• The content in Twitter is dynamically changing and growing. According to
http://tweespeed.com, more than 15,000 tweets per minute are published in
Twitter on average. Existing algorithms typically detect events by clustering
together words with similar burst patterns. Furthermore, they usually require
the number of events to be pre-set, which is difficult to obtain in
Twitter due to its real-time nature. A more scalable approach to event detection
is therefore desired.
2 Related Work
Existing event detection algorithms can be broadly classified into two categories: document-
pivot methods and feature-pivot methods. The former detect events by clustering doc-
uments based on the semantic distance between documents [23], while the latter study
the distributions of words and discover events by grouping words together [11].
EDCoW can be viewed as a feature-pivot method. We therefore focus on represen-
tative feature-pivot methods here.
In [11], Kleinberg proposes to detect events using an infinite-state automaton, in
which events are modeled as state transitions. Different from this work, Fung et al.
model each individual word's appearance as a binomial distribution, and identify bursts of
each word with a threshold-based heuristic [6].
All these algorithms essentially detect events by analyzing word-specific signals in
the time domain. There are also attempts to analyze signals in the frequency domain.
[7] applies Discrete Fourier Transformation (DFT), which converts the signals from the
time domain into the frequency domain. A burst in the time domain corresponds to a
spike in the frequency domain. However, DFT cannot locate the time periods when the
bursts happen, which is important in event detection. [7] remedies this by estimating
such periods with the Gaussian Mixture model.
Compared with DFT, wavelet transformation has more desirable features. A wavelet
is a quickly vanishing oscillating function [5, 9]. Unlike the sine and cosine used
in the DFT, which are localized in frequency but extend infinitely in time, wavelets are
localized in both time and frequency domain. Therefore, wavelet transformation is able
to provide precise measurements about when and to what extent bursts take place in the
signal. This makes wavelet transformation a better choice for event detection, and it is
applied in this paper to build signals for individual words. It has also been applied to
detect events from Flickr data in [4].
There is recently an emerging interest in harvesting collective intelligence from
social media like Twitter. For example, [17] tries to detect whether users discuss any
new event that has never appeared before in Twitter. However, it does not differentiate
whether the new event, if any, is trivial or not. In [19], the authors exploit tweets to
detect critical events like earthquakes. They formulate event detection as a classification
problem. However, users are required to specify explicitly the events to be detected,
and a new classifier needs to be trained for each new event, which makes the approach
difficult to extend.
3 Wavelet Analysis
Wavelet analysis is applied in EDCoW to build signals for individual words. This sec-
tion gives a brief introduction to the related concepts.
A wavelet is a quickly vanishing oscillating function. Unlike the sine and cosine functions
of Fourier analysis, which are precisely localized in frequency but extend infinitely in
time, wavelets are relatively localized in both time and frequency.
The core of wavelet analysis is wavelet transformation. Wavelet transformation
converts signal from the time domain to the time-scale domain (scale can be consid-
ered as the inverse of frequency). It decomposes a signal into a combination of wavelet
coefficients and a set of linearly independent basis functions. The set of basis func-
tions, termed wavelet family, are generated by scaling and translating a chosen mother
wavelet ψ(t). Scaling corresponds to stretching or shrinking ψ(t), while translation
moves it to a different temporal position without changing its shape. In other words, a
wavelet family $\psi_{a,b}(t)$ is defined as [5]:

$$\psi_{a,b}(t) = |a|^{-1/2}\,\psi\!\left(\frac{t-b}{a}\right) \qquad (1)$$

where $a, b \in \mathbb{R}$, $a \neq 0$ are the scale and translation parameters respectively, and $t$ is
the time.
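To make Eq. (1) concrete, here is a minimal pure-Python sketch of generating family members by scaling and translating a mother wavelet. The Mexican-hat (Ricker) wavelet is used as an illustrative, assumed choice of ψ(t); EDCoW itself does not mandate it.

```python
import math

def mother_wavelet(t):
    # Mexican-hat (Ricker) wavelet, a common example of a mother wavelet
    return (2.0 / (math.sqrt(3.0) * math.pi ** 0.25)) * (1.0 - t * t) * math.exp(-t * t / 2.0)

def wavelet_family(a, b):
    # psi_{a,b}(t) = |a|^{-1/2} * psi((t - b) / a), per Eq. (1)
    assert a != 0
    return lambda t: abs(a) ** -0.5 * mother_wavelet((t - b) / a)

# Stretching (a = 2) and shifting (b = 3) the mother wavelet:
psi = wavelet_family(2.0, 3.0)
print(psi(3.0))  # value at the translated center, scaled by |a|^{-1/2}
```

Scaling by a widens or narrows the oscillation, while b slides it along the time axis; neither changes its shape.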
Wavelet transformation is classified into continuous wavelet transformation (CWT)
and discrete wavelet transformation (DWT). Generally speaking, CWT provides a re-
dundant representation of the signal under analysis. It is also time consuming to com-
pute directly. In contrast, DWT provides a non-redundant, highly efficient wavelet
representation of the signal. For (1) a special selection of the mother wavelet function
ψ(t) and (2) a discrete set of parameters, aj = 2−j and bj,k = 2−j k, with j, k ∈ Z, the
wavelet family in DWT is defined as ψj,k (t) = 2j/2 ψ(2j t − k), which constitutes an
orthonormal basis of L2 (R). The advantage of orthonormal basis is that any arbitrary
function could be uniquely decomposed and the decomposition can be inverted.
DWT provides a non-redundant representation of the signal S, and its values con-
stitute the coefficients of a wavelet series, i.e. $\langle S, \psi_{j,k}\rangle = C_j(k)$, where $C_j(k)$ denotes
the k-th coefficient at scale j. DWT produces only as many coefficients as there are
sample points within the signal under analysis S, without loss of information. These
wavelet coefficients provide full information in a simple way, along with a direct estimation
of local energies at the different scales.
Assume the signal is given by its sampled values, i.e. $S = \{s_0(n) \mid n = 1, \ldots, M\}$,
where the sampling rate is $t_s$ and M is the total number of sample points in the sig-
nal. Suppose the sampling rate is $t_s = 1$. If the decomposition is carried out
over all scales, i.e. $N_J = \log_2(M)$, the signal can be reconstructed by

$$S(t) = \sum_{j=1}^{N_J}\sum_{k} C_j(k)\,\psi_{j,k}(t) = \sum_{j=1}^{N_J} r_j(t)$$

where the wavelet coefficients $C_j(k)$ can be interpreted as the local residual errors between
successive signal approximations at scales j and j+1 respectively, and $r_j(t)$ is the detail
signal at scale j, which contains the information of the signal S(t) corresponding to the
frequencies $2^{j-1}\omega_s \leq |\omega| \leq 2^{j}\omega_s$.
The wavelet energy of signal S at each scale j ($j \leq N_J$) can be computed as:

$$E_j = \sum_{k} |C_j(k)|^2 \qquad (2)$$

The total energy is then:

$$E_{total} = \sum_{j=1}^{N_J+1} E_j \qquad (4)$$
A normalized ρ-value measures the relative wavelet energy (RWE) at each individual
scale j:

$$\rho_j = \frac{E_j}{E_{total}} \qquad (5)$$

with $\sum_{j=1}^{N_J+1} \rho_j = 1$. The distribution $\{\rho_j\}$ represents the signal's wavelet energy distribution
across different scales [18].
Evaluating the Shannon Entropy [21] on distribution {ρj } leads to the measurement
of Shannon wavelet entropy (SWE) of signal S [18]:
$$SWE(S) = -\sum_{j} \rho_j \cdot \log \rho_j \qquad (6)$$
SWE measures the signal energy distribution at different scales (i.e. frequency bands).
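As an illustration of Eqs. (2)-(6), the sketch below computes the relative wavelet energies and the Shannon wavelet entropy of a signal, using a plain Haar DWT as an assumed, simple choice of mother wavelet (the signal length is taken to be a power of two):

```python
import math

def haar_dwt_energies(signal):
    """Energies E_j (Eq. (2)) of the detail coefficients at each scale of a
    Haar DWT, plus the energy of the final approximation as the (N_J+1)-th entry."""
    approx = list(signal)
    energies = []
    while len(approx) > 1:
        detail = [(approx[i] - approx[i + 1]) / math.sqrt(2) for i in range(0, len(approx), 2)]
        approx = [(approx[i] + approx[i + 1]) / math.sqrt(2) for i in range(0, len(approx), 2)]
        energies.append(sum(c * c for c in detail))
    energies.append(approx[0] ** 2)
    return energies

def wavelet_entropy(signal):
    energies = haar_dwt_energies(signal)
    total = sum(energies)                               # Eq. (4)
    rhos = [e / total for e in energies if total > 0]   # Eq. (5)
    return -sum(r * math.log(r) for r in rhos if r > 0) # Eq. (6)

flat = [1.0] * 8
bursty = [0.0, 0.0, 0.0, 8.0, 8.0, 0.0, 0.0, 0.0]
print(wavelet_entropy(flat) < wavelet_entropy(bursty))  # True: a burst spreads energy across scales
```

A constant signal concentrates all its energy in the coarsest approximation (entropy 0), while a burst spreads energy across scales and raises the entropy, which is exactly the property EDCoW exploits.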
H-Measure of signal S is defined as:
4 EDCoW in Detail
This section details EDCoW ’s three main components: (1) signal construction, (2)
cross correlation computation, and (3) modularity-based graph partitioning.
The value of the signal for word w, $s_w(t)$, at each sample point t is given by its DF-IDF
score, which is defined as:

$$s_w(t) = \frac{N_w(t)}{N(t)} \times \log\frac{\sum_{i=1}^{T_c} N(i)}{\sum_{i=1}^{T_c} N_w(i)} \qquad (9)$$
The first component of the right hand side (RHS) of Eq. (9) is DF (document fre-
quency). Nw (t) is the number of tweets which contain word w and appear after sample
point t − 1 but before t, and N (t) is the number of all the tweets in the same period
of time. DF is the counterpart of TF in TF-IDF (Term Frequency-Inverse Document
Frequency), which is commonly used to measure words’ importance in text retrieval
[20]. The difference is that DF only counts the number of tweets containing word w.
This is necessary in the context of Twitter, since multiple appearances of the same word
in one single short tweet are usually associated with the same event. The second com-
ponent of the RHS of Eq. (9) is equivalent to IDF. The difference is that, the collection
size is fixed for the conventional IDF, whereas new tweets are generated very fast in
Twitter. Therefore, the IDF component in Eq. (9) makes it possible to accommodate
new words. sw (t) takes a high value if word w is used more often than others from
t − 1 to t while it is rarely used before Tc , and a low value otherwise.
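The DF-IDF score of Eq. (9) can be sketched in a few lines of Python. The tokenized tweets below are invented toy data; each tweet is represented as a set of words, so multiple occurrences within one tweet count once, matching the DF component:

```python
import math

def df_idf(tweets_per_interval, word):
    """DF-IDF score of `word` at the latest sample point, per Eq. (9).
    `tweets_per_interval` is a list (one entry per sample point) of lists of
    tokenized tweets, each tweet given as a set of words."""
    current = tweets_per_interval[-1]                      # interval (t-1, t] at T_c
    n_w_t = sum(1 for tweet in current if word in tweet)   # tweets containing the word now
    n_t = len(current)
    total = sum(len(interval) for interval in tweets_per_interval)
    total_w = sum(sum(1 for tweet in interval if word in tweet)
                  for interval in tweets_per_interval)
    if n_t == 0 or total_w == 0:
        return 0.0
    return (n_w_t / n_t) * math.log(total / total_w)       # DF times the IDF-like factor

intervals = [
    [{"good", "morning"}, {"coffee"}],
    [{"flood", "orchard"}, {"flood", "rain"}, {"lunch"}],
]
print(df_idf(intervals, "flood") > df_idf(intervals, "lunch"))  # True
```

A word that dominates the latest interval but is rare overall scores high, while a word that is common throughout is discounted by the IDF-like factor.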
In the second stage, the signal is built with the help of a sliding window, which
covers a number of 1st-stage sample points. Denote the size of the sliding window
as ∆. Each 2nd-stage sample point captures how much the change in sw (t) is in the
sliding window, if there is any.
In this stage, the signal for word w at the current time $T_c'$ is again represented as a
sequence:

$$S_w' = [s_w'(1), s_w'(2), \cdots, s_w'(T_c')] \qquad (10)$$
Note that t in the first stage and t′ in the second stage are not necessarily in the same
unit. For example, the interval between two consecutive t’s in the first stage could be
10 minutes, while that in the second stage could be one hour. In this case, ∆ = 6.
To compute the value of s′w (t′ ) at each 2nd-stage sample point, EDCoW first moves
the sliding window to cover 1st-stage sample points from sw ((t′ − 2) ∗ ∆ + 1) to
sw ((t′ − 1) ∗ ∆). Denote the signal fragment in this window as Dt′ −1 . EDCoW then
derives the H-measure of the signal in Dt′ −1 . Denote it as Ht′ −1 . Next, EDCoW
shifts the sliding window to cover 1st-stage sample points from sw ((t′ − 1) ∗ ∆ + 1)
to sw (t′ ∗ ∆). Denote the new fragment as Dt′ . Then, EDCoW concatenates segment
Dt′ −1 and Dt′ sequentially to form a larger segment Dt∗ , whose H-measure is also
obtained; denote it as $H_{t^*}$. Subsequently, the value of $s_w'(t')$ is calculated as:

$$s_w'(t') = \begin{cases} \dfrac{H_{t^*} - H_{t'-1}}{H_{t'-1}} & \text{if } H_{t^*} > H_{t'-1}; \\[4pt] 0 & \text{otherwise} \end{cases} \qquad (11)$$
If there is no change in sw (t) within Dt′ , there will be no significant difference between
s′w (t′ ) and s′w (t′ − 1). On the other hand, an increase/decrease in the usage of word
w would cause sw (t) in Dt′ to appear in more/fewer scales. This is translated into
an increase/decrease of the wavelet entropy in Dt∗ from that in Dt′ −1 . And s′w (t′ )
encodes how much the change is.
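Putting Eq. (11) together with the sliding window, the sketch below derives one 2nd-stage sample point from a 1st-stage signal. As an assumption of this sketch, the H-measure is approximated by the raw Shannon wavelet entropy over a plain Haar DWT; the toy signal values are invented:

```python
import math

def wavelet_entropy(signal):
    """Shannon wavelet entropy via a plain Haar DWT (zero-padded to a power of two)."""
    approx = list(signal)
    while len(approx) & (len(approx) - 1):
        approx.append(0.0)
    energies = []
    while len(approx) > 1:
        detail = [(approx[i] - approx[i + 1]) / math.sqrt(2) for i in range(0, len(approx), 2)]
        approx = [(approx[i] + approx[i + 1]) / math.sqrt(2) for i in range(0, len(approx), 2)]
        energies.append(sum(c * c for c in detail))
    energies.append(approx[0] ** 2)
    total = sum(energies)
    rhos = [e / total for e in energies if total > 0]
    return -sum(r * math.log(r) for r in rhos if r > 0)

def second_stage_point(first_stage, t2, delta):
    """s'_w(t') per Eq. (11): relative entropy increase from window D_{t'-1}
    to the concatenated window D* = D_{t'-1} + D_{t'}."""
    d_prev = first_stage[(t2 - 2) * delta : (t2 - 1) * delta]
    d_curr = first_stage[(t2 - 1) * delta : t2 * delta]
    h_prev = wavelet_entropy(d_prev)
    h_star = wavelet_entropy(d_prev + d_curr)
    return (h_star - h_prev) / h_prev if h_star > h_prev and h_prev > 0 else 0.0

# A nearly flat DF-IDF signal followed by a burst in the second window (delta = 4):
s = [0.1, 0.12, 0.1, 0.11, 0.9, 1.0, 0.9, 0.1]
print(second_stage_point(s, t2=2, delta=4) > 0)  # True: the burst raises the entropy
```

A burst in the second window spreads the concatenated segment's wavelet energy over more scales, raising the entropy and producing a positive s'_w(t').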
[Figure 1: Illustration of the two-stage signal construction. 1st-stage sample points t = 0, 1, 2, ..., T_c are grouped by a sliding window of size ∆ (here ∆ = 3) into segments D_1, D_2, ...; consecutive segments are concatenated (e.g. D*_2), yielding 2nd-stage sample points t' = 0, 1, 2, ..., T'_c.]

[Figure 2: Signals of the words "flood" and "orchard". (a) 1st-stage signals, time index in 10-minute intervals; (b) 2nd-stage signals, time index in 60-minute intervals.]
Figure 2 plots the signals of two example words, "flood" and "orchard", during a heavy
downpour in Singapore, which caused a flash flood in the premium shopping belt Or-
chard Road. At each sample point in Figure 2(a), $N_w(t)$ is the number of the tweets
published in the past 10 minutes which contain the specific word, while N(t) is the
number of all the tweets published in the same period of time. Figure 2(b) is generated
with ∆ = 6, i.e. one 2nd-stage sample point encodes the change of a word’s appear-
ance pattern in the past 60 minutes. Figure 2 shows that the bursts of the words are
more salient in the corresponding 2nd-stage signals.
Capturing the change of a word's appearance pattern within a period of time in
one 2nd-stage sample point reduces the space required to store the signal. In fact,
event detection needs only the information whether a word exhibits any burst within
certain period of time (i.e. ∆ in the case of EDCoW ). As we can see in Figure 2,
the 1st-stage signal records the complete appearance history of a specific word, much of
which is redundant for this purpose. Nevertheless, most existing algorithms store data
equivalent to the 1st-stage signal.
After the signals are built, each word is then represented as its corresponding signal
in the next two components1.
1 In the rest of this paper, “signal” and “word” are used interchangeably.
4.2 Computation of Cross Correlation
EDCoW detects events by grouping a set of words with similar patterns of burst. To
achieve this, the similarities between words need to be computed first.
This component receives as input a segment of signals. Depending on the appli-
cation scenario, the length of the segment varies. For example, it could be 24 hours, if a
summary of the events that happened in one day is needed. It could also be as short as a
few minutes, if a timelier understanding of what is happening is required. Denote this
segment as $S^I$, and an individual signal in this segment as $S_i^I$.
In signal processing, cross correlation is a common measure of the similarity between
two signals [14]. Representing two signals as functions f(t) and g(t), the cross correla-
tion between the two is defined as:

$$(f \star g)(t) = \sum_{\tau} f^{*}(\tau)\, g(t + \tau) \qquad (12)$$

where $f^{*}$ denotes the complex conjugate of f.
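For real-valued word signals the conjugate is just the signal itself, and since EDCoW applies no time lag, Eq. (12) reduces to an inner product at t = 0. A minimal sketch, with invented toy signals:

```python
def cross_correlation(f, g):
    """Zero-lag cross correlation of two equal-length real signals
    (Eq. (12) with t = 0; for real signals f* = f)."""
    assert len(f) == len(g)
    return sum(x * y for x, y in zip(f, g))

a = [0.0, 0.1, 0.9, 0.8, 0.1]   # two words bursting at the same time...
b = [0.0, 0.2, 1.0, 0.7, 0.0]
c = [0.9, 0.8, 0.1, 0.0, 0.1]   # ...and one bursting at a different time
print(cross_correlation(a, b) > cross_correlation(a, c))  # True
```

Words that burst together yield a large value, which is what lets EDCoW group them into one event.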
EDCoW first filters away trivial words by looking at each signal's auto-correlation $A_i^I$:
a signal is kept only if $A_i^I$ exceeds a threshold θ1, and θ1 is set as follows:

$$\theta_1 = \text{median}_{S_i^I \in S^I}(A_i^I) + \gamma \times \text{MAD}_{S_i^I \in S^I}(A_i^I) \qquad (14)$$

Empirically, γ is not less than 10 due to the high skewness of the $A_i^I$ distribution.
Denote the number of the remaining signals as K. Cross correlation is then com-
puted in a pair-wise manner between all the remaining K signals. Currently, the cross
correlation between a pair of signals is calculated without applying time lag2 . Denote
the cross correlation between SiI and SjI as Xij .
It is observed that the distribution of Xij exhibits a similar skewness as the one
shown in Figure 3. Given this, for each signal SiI , EDCoW applies another threshold
θ2 on Xij , which is defined as follows:
$$\theta_2 = \text{median}_{S_j^I \in S^I}(X_{ij}) + \gamma \times \text{MAD}_{S_j^I \in S^I}(X_{ij}) \qquad (15)$$
Here, γ is the same as the one in Eq. (14). We then set Xij = 0 if Xij ≤ θ2 .
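The median-plus-γ·MAD threshold form shared by Eqs. (14) and (15) can be sketched directly (MAD taken here as the median absolute deviation from the median; the sample auto-correlation values are invented):

```python
import statistics

def mad_threshold(values, gamma):
    """theta = median + gamma * MAD, the threshold form of Eqs. (14)-(15)."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return med + gamma * mad

# A heavily right-skewed set of auto-correlation values:
auto_corrs = [0.01, 0.02, 0.02, 0.03, 0.01, 0.02, 5.0]
theta1 = mad_threshold(auto_corrs, 10)
print([v for v in auto_corrs if v > theta1])  # only the outlier 5.0 survives
```

Because median and MAD are robust to outliers, the bulk of trivial, low-activity words sets the threshold, and only the few strongly bursting words pass it.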
The remaining non-zero Xij ’s are then arranged in a square matrix to form the
correlation matrix M. Since we are only interested in the similarity between pairs of
signals, the cells on the main diagonal of M are set to be 0. M is highly sparse after
applying threshold θ2 . Figure 4 shows a portion of matrix M built from the data used
in Figure 2. It shows the cross correlation between the top 20 words with the highest
$A_i^I$ on that day.
Figure 4: Illustration of Correlation Matrix M. The lighter the color of the cell in the
matrix, the higher the similarity between the two signals is, and vice versa.
The main computation task in this component is the pair-wise cross correlation
computation, which has a time complexity of $O(n^2)$, where n is the number
of individual signals involved in the computation. n is generally very small after
filtering with θ1 (in Eq. (14)). For example, in the experimental studies, less than 5%
of all the words remain after filtering with θ1 . The quadratic complexity is therefore
still tractable.
to one of them. By varying the time lag, it is possible to study the temporal relationship between two words,
e.g. a word appears earlier than another in an event. We plan such study in future work.
The correlation matrix M can be interpreted as a weighted undirected graph G =
(V, E, W). Here, the vertex set V contains all the K signals remaining after filtering with auto-
correlation, while the edge set is E = V × V. There is an edge between two vertices $v_i$
and $v_j$ ($v_i, v_j \in V$) if $X_{ij} > \theta_2$, and its weight is $w_{ij} = X_{ij}$.
With such a graph theoretical interpretation of M, event detection can then be for-
mulated as a graph partitioning problem, i.e. to cut the graph into subgraphs. Each
subgraph corresponds to an event, which contains a set of words with high cross cor-
relation. And the cross correlation between words in different subgraphs are expected
to be low.
Newman proposes a metric called modularity to measure the quality of such par-
titioning [12, 13]. The modularity of a graph is defined as the sum of weights of all
the edges that fall within subgraphs (after partitioning) subtracted by the expected edge
weight sum if edges were placed at random. A positive modularity indicates the possi-
ble presence of partitioning. Define node $v_i$'s degree as $d_i = \sum_j w_{ji}$, and the
sum of all the edge weights in G as $m = \sum_i d_i / 2$. The modularity of the
partitioning is defined as:

$$Q = \frac{1}{2m}\sum_{ij}\left(w_{ij} - \frac{d_i \cdot d_j}{2m}\right)\delta_{c_i,c_j} \qquad (16)$$

where $c_i$ and $c_j$ are the indices of the subgraphs that nodes $v_i$ and $v_j$ belong to respectively,
and $\delta_{c_i,c_j}$ is the Kronecker delta: $\delta_{c_i,c_j} = 1$ if $c_i = c_j$, and $\delta_{c_i,c_j} = 0$ otherwise.
The goal here is to partition G such that Q is maximized. Newman has proposed a
very intuitive and efficient spectral graph theory-based approach to solve this optimiza-
tion problem [13]. It first constructs a modularity matrix (B) of the graph G, whose
elements are defined as:
$$B_{ij} = w_{ij} - \frac{d_i \cdot d_j}{2m} \qquad (17)$$
Eigen-analysis is then conducted on the symmetric matrix B to find its largest eigen-
value and the corresponding eigenvector $\vec{v}$. Finally, G is split into two subgraphs based
on the signs of the elements in $\vec{v}$. The spectral method is recursively applied to each
of the two pieces to further divide them into smaller subgraphs.
Note that, with the modularity-based graph partitioning, EDCoW does not require an
extra parameter to pre-set the number of subgraphs (i.e. events) to be generated. It
stops automatically when no more subgraphs can be constructed (i.e. Q < 0). This is
one of the advantages EDCoW has over other algorithms.
The main computation task in this component is finding the largest eigenvalue (and
the corresponding eigenvector) of the sparse symmetric modularity matrix B. This can
be efficiently solved by power iteration, which is able to scale up with the increase of
the number of words used in tweets [8].
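A minimal sketch of this partitioning step: build B per Eq. (17), approximate the leading eigenvector by power iteration, and split by sign. Note one assumption beyond the text: a diagonal shift (a standard device, not something taken from [13]) is added so the most positive eigenvalue also dominates in magnitude, as plain power iteration converges to the largest-magnitude eigenvalue. The 4-word toy matrix is invented:

```python
import math, random

def modularity_matrix(w):
    """B_ij = w_ij - d_i * d_j / (2m), per Eq. (17)."""
    d = [sum(row) for row in w]
    two_m = sum(d)
    n = len(w)
    return [[w[i][j] - d[i] * d[j] / two_m for j in range(n)] for i in range(n)]

def leading_eigenvector(b, iters=500):
    """Power iteration for the eigenvector of B's most positive eigenvalue.
    The diagonal shift makes that eigenvalue the largest in magnitude."""
    n = len(b)
    shift = max(sum(abs(x) for x in row) for row in b)  # Gershgorin-style bound
    a = [[b[i][j] + (shift if i == j else 0.0) for j in range(n)] for i in range(n)]
    random.seed(0)
    v = [random.random() + 0.1 for _ in range(n)]
    for _ in range(iters):
        v = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / norm for x in v]
    return v

# Two tightly correlated word pairs, weakly linked to each other:
w = [[0.0, 0.9, 0.0, 0.1],
     [0.9, 0.0, 0.1, 0.0],
     [0.0, 0.1, 0.0, 0.9],
     [0.1, 0.0, 0.9, 0.0]]
v = leading_eigenvector(modularity_matrix(w))
split = [x >= 0.0 for x in v]
print(split[0] == split[1] and split[2] == split[3] and split[0] != split[2])  # True
```

The sign pattern of the leading eigenvector places each tightly correlated pair on the same side of the cut, i.e. in the same candidate event.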
Since each tweet is short (less than 140 characters), it is not reasonable for an event to be
associated with too many words either.
Given this, EDCoW defines a measurement to evaluate the events’ significance.
Denote the subgraph (after partitioning) corresponding to an event as $C = (V^c, E^c, W^c)$,
where $V^c$ is the vertex set, $E^c = V^c \times V^c$, and $W^c$ contains the weights of the edges, which are
given by a portion of the correlation matrix M. The event significance is then defined as:
$$\epsilon = \Big(\sum w_{ij}^{c}\Big) \times \frac{e^{1.5n}}{(2n)!}, \qquad n = |V^c| \qquad (18)$$
Eq. (18) contains two parts. The first part sums up all the cross correlation values
between signals associated with an event. The second part discounts the significance
if the event is associated with too many words. The higher ε is, the more significant
the event is. Finally, EDCoW filters out events with exceptionally low values of ε (i.e.
ε ≪ 0.1).
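Eq. (18) can be computed directly; the pairwise correlation weights below are invented for illustration (a 3-word event has 3 pairs, a 10-word event has 45):

```python
import math

def significance(pair_weights, n):
    """Event significance per Eq. (18): summed cross correlations of the
    event's word pairs, discounted by e^{1.5n} / (2n)! as the word count n grows."""
    return sum(pair_weights) * math.exp(1.5 * n) / math.factorial(2 * n)

few = significance([0.8, 0.9, 0.7], n=3)   # compact 3-word event
many = significance([0.8] * 45, n=10)      # 10-word event with a larger raw sum
print(few > many)  # True: the factorial discount dominates quickly
```

Because (2n)! grows far faster than e^{1.5n}, adding words to an event shrinks ε sharply unless their pairwise correlations are correspondingly strong, which penalizes sprawling, incoherent word groups.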
5 Empirical Evaluation
To validate the correctness of EDCoW, we conduct an experimental study with a
dataset collected from Twitter.
A filter keeping only words with at least five appearances per day on average is applied5.
We further filter out words with certain patterns repeated more than two times, e.g.
"booooo" ("o" repeated 5 times) and "hahahaah" ("ha" repeated 3 times). Such words are
mainly used for emotional expression, and are not useful in defining events. This leaves
8,140 unique words.
To build signals for individual words, we set the interval between two consecutive
1st-stage sample points to be 10 minutes, and ∆ = 6. By doing so, the final sig-
nals constructed capture the hourly change of individual words’ appearance patterns.
EDCoW is then applied to detect events on every day in June 2010.
word, even if the word appears more than once in one single tweet.
Day | Event (keywords, ε value) | Description
Day 1-3 | No event detected |
Day 4 | Event 1: democrat, naoto (ε = 0.417) | Ruling Democratic Party of Japan elected Naoto Kan as chief.
Day 4 | Event 2: ss501, suju (ε = 0.414) | Korean pop bands Super Junior's and SS501's performance on mubank.
Day 4 | Event 3: music, mubank (ε = 0.401) | Related to Event 2; mubank is a popular KBS entertainment show.
Day 4 | Event 4: shindong, youngsaeng (ε = 0.365) | Related to Event 2; Shindong and Youngsaeng are members of the two bands.
Day 4 | Event 5: junior, eunhyuk (ε = 0.124) | Related to Event 2; Eunhyuk is a member of Super Junior.
Day 5 | Event 6: robben, break (ε = 0.404) | No clear corresponding real-life event.
Day 6 | No event detected |
Day 7 | Event 7: kobe, kristen (ε = 0.417) | Two events: Kristen Stewart won some MTV awards, and Kobe Bryant in an NBA match.
Day 8 | Event 8: #iphone4, ios4, iphone (ε = 0.416) | iPhone 4 released during WWDC 2010.
Day 8 | Event 9: reformat, hamilton (ε = 0.391) | No clear corresponding real-life event.
Day 8 | Event 10: avocado, commence, ongoing (ε = 0.124) | No clear corresponding real-life event.
Day 9 | Event 11: #failwhale, twitter (ε = 0.360) | A number of users complained they could not use Twitter due to over-capacity. A logo with a whale is usually used to denote over-capacity.
Day 10 | Event 12: vuvuzela, soccer (ε = 0.387) | People started to talk about the World Cup.
Day 11 | Event 13: #svk, #svn (ε = 0.418) | #svk and #svn represent Team Slovakia and Slovenia in World Cup 2010.
Day 12 | Event 14: #kor, greec, #gre (ε = 0.102) | A match between South Korea and Greece in World Cup 2010.
Day 13 | Event 15: whale, twitter (ε = 0.417) | Similar to Event 11.
Day 14 | Event 16: lippi, italy (ε = 0.326) | Italy football team coach Marcello Lippi made some comments after a match in World Cup 2010.
Day 15 | Event 17: drogba, ivory (ε = 0.417) | Football player Drogba from Ivory Coast was given special permission to play in World Cup 2010.
Day 15 | Event 18: #prk, #bra, north (ε = 0.114) | A match between North Korea and Brazil in World Cup 2010.
Day 16 | Event 19: orchard, flood (ε = 0.357) | Flood in Orchard Road.
Day 17 | Event 20: greec, #gre, nigeria (ε = 0.122) | A match between Greece and Nigeria in World Cup 2010.
Day 18 | Event 21: #srb, podolski (ε = 0.403) | A match between Germany and Serbia in World Cup 2010. Podolski is a member of Team Germany.
Day 19-30 | No event detected |

Table 1: All the Events Detected by EDCoW in June 2010
A larger value of γ filters more signals away. In this case, some of the “relevant”
events, if any, are already filtered before graph partitioning is applied to detect them.
We again manually check the events detected. Although more events (with ǫ > 0.1) are
detected, only one new “relevant” event other than those listed in Table 1 is detected. It
is associated with two words “ghana” and “#gha”, and corresponds to a match between
team Ghana and Serbia on June 13, 2010. There are another eight “relevant” events
out of the total 40 detected events, which correspond to Event 1, 2, 3, 5 (with different
words though), 7, 11, 13, and 20 in Table 1. The precision is 22.5%.
Due to space constraints, the details of the events detected with different values of γ
are omitted here. We only summarize the precision achieved with different values of γ in
Table 2. γ = 40 achieves the best precision among all the settings studied in the
experimental study.
Day | Topic ID | Probability | Top Words
16 | 13 | 0.229 | flood, orchard, rain, spain, road, weather, singapor, love, cold
16 | 48 | 0.095 | time, don, feel, sleep, love, tomorrow, happi, home, hate
16 | 11 | 0.091 | time, love, don, feel, wait, watch, singapor, hope, life
16 | 8 | 0.079 | watch, world, cup, match, time, love, don, south, goal

Table 3: Topics Detected by LDA on June 16, 2010
The topics detected by LDA are more difficult to interpret than the events listed in Table 1.
Although "flood" and "orchard" are identified as the top words for the most related topic
on June 16, 2010, they are mixed with other words as well. It is also not straightforward
to see that Topic 8 may be related to
"world cup". The other two top topics in Table 3 are even more difficult to interpret, as
their top words are all trivial words. Moreover, after setting the number of topics (i.e.
T), LDA always returns a distribution over T topics for each document, no matter
whether the document (i.e. the tweets published within one particular day) discusses
any real-life event or not. Further processing is required to improve the results
generated by LDA in the context of event detection, e.g. applying threshold-based
heuristics to filter non-eventful topics and words. In contrast, EDCoW has the ability
to filter trivial words away before applying clustering technique to detect the events.
More importantly, it requires no parameter specifying the number of events; it can
automatically generate different numbers of events based on users' discussions in the
tweets.
[Figure 5: Architecture of the proof-of-concept system, with components including a Data Feeder and Sentiment analysis.]
6.1 Data Collection
In the experimental study, we detect events from the tweets of general discussions. In
contrast, in Voters’ Voice, we are interested in a more focused discussion, i.e. SGE
2011. As mentioned earlier, users usually publish tweets about various topics. It is not
reasonable to assume that all the users would be interested in SGE-related topics. If
we apply the same strategy we used to collect tweets in the experimental study, it could
be expected that the events detected would include many non-SGE-related ones. Given
this, we apply a different strategy in collecting the tweets:
1. We identify a set of key phrases that could potentially be used to discuss different
parties in SGE 2011, including political parties’ name, their candidates’ name,
and the constituencies they contest in.
2. We then monitor the Twitter public timeline with Twitter Streaming API7 for
tweets containing any of those key phrases.
3. For each tweet containing any key phrase, we collect it if it is published by
Singapore-based users.
We collected tweets from April 13, 2011 to May 13, 2011, for a total
of 147,129 tweets. Figure 6 illustrates the change in volume over time. It
is observed that the trend of the volume coincides with the major milestones of
SGE 2011. There is a steep increase in the volume starting from April 27, which was
the nomination day. There is an even steeper increase from May 7, which was
the polling day; and it quickly dies off two days after the polling day. There is an
obvious drop in the volume on May 4, when the Prime Minister went online to interact
with netizens on Facebook. Many users switched their discussion venue from
Twitter to Facebook to participate in the online interaction with the Prime Minister, which
caused the volume of tweets to drop8.
[Figure 6: Number of tweets collected per day over the collection period.]
6.2 Analytics and Visualization of Results
EDCoW is then applied to analyze what the focal points are in the party-specific dis-
cussions on a daily basis, i.e. which topics attract the most significant discussion
(as measured by Eq. (18)) about the different parties every day. As mentioned earlier,
a topic is basically a group of non-trivial keywords showing similar usage patterns. To
make the detected topics easier to understand, we further extract entities based on the de-
tected event-related keywords with some intuitive heuristics. For example, for an event
which is represented by a group of keywords including "tin", "pei", and "ling", we are
able to extract "tin pei ling" (Tin Pei Ling is a candidate in SGE 2011) as an entity.
Besides hot topic detection, we also apply sentiment analysis technology [3] to find
out netizens' opinions (positive, neutral, or negative) regarding the detected
topics. We then further aggregate the sentiments on the detected topics to generate the
sentiments about different political parties on a daily basis.
The analytics results (including the detected hot topics and sentiments) are then
visualized and presented to end users. Figure 7 shows some screen captures of the
visualization.
In Figure 7(a), the top right panel lists the significant topics (events) detected by
EDCoW about a party on a particular day. Each vertical bar corresponds to one of such
topics, and the altitude of each bar represents each topic’s significance (as measured by
Eq. (18)). When any of the vertical bars is clicked (to select one topic), the bottom
right panel displays the sentiment changes over time of all the words/phrases related
to the corresponding topic (recall that a topic is basically a group of related words).
On the left panel, all the tweets related to the corresponding topic are listed, with their
sentiment polarity displayed in different colors (tweets with positive/negative sentiment
are highlighted in green/red respectively, and neutral sentiment is represented with no
color).
Figure 7(b) presents the trends of different parties’ sentiment over time. Users can
choose one party by clicking on one of the seven parties listed on the left panel. Subse-
quently, the selected party’s sentiment trend over time is displayed on the right panel.
For each selected party, there will be two lines: the one on the top shows the trend of
positive sentiment; while the one on the bottom shows that of negative sentiment. Each
point on the two lines displays the ratio of the tweets carrying positive/negative senti-
ment to the total number of tweets about the selected party on one day. Users are also
allowed to select more than one party on the left panel, so that their sentiment trends
could be compared in the right panel.
Date   | Topic                                          | Description
Apr 27 | vivian balakrishnan, elections                 | Netizens uttered sentiments about Vivian Balakrishnan (a candidate from the People’s Action Party, the ruling party) focusing on non-bread-and-butter issues during the campaign.
Apr 28 | teck siong, disqualified, deadline             | Ng Teck Siong (an independent candidate) was disqualified from contesting in the Tanjong Pagar constituency because he submitted his nomination form 35 seconds after the deadline.
May 7  | parliament, tin pei ling, sylvia lim, worried  | While the polling results were being counted, netizens actively discussed the possible outcome. They seemed particularly interested in Tin Pei Ling (a candidate from the ruling party) and Sylvia Lim (a candidate from the Workers’ Party, an opposition party). Sylvia Lim appeared to be viewed more favorably than Tin Pei Ling (e.g. “I am so worried that Sylvia Lim does not get into Parliament, but Tin Pei Ling does.”).
May 8  | gracious, loser, lina chiam, listen            | Netizens showed sympathy and appreciation for Lina Chiam (a candidate from the Singapore People’s Party, an opposition party) after the results were announced officially (e.g. “What a gracious loser, what a loss. SPP’s Lina Chiam overheard saying: ’Listen to Sitoh Yih Pin and be a good resident.”. Sitoh Yih Pin is a candidate from the ruling party who contested in the same constituency).
Table 4: Examples of the Events Detected during SGE 2011
conducted. Third, EDCoW currently does not exploit the relationships among users. It
deserves further study to see how the analysis of these relationships could contribute
to event detection. Last but not least, the current design of EDCoW does not apply a
time lag when computing the cross correlation between a pair of words. We plan to
introduce time lags and study the interaction between different words, e.g. whether
one word appears earlier than another in an event. This could potentially contribute
to studying the temporal evolution of events.
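To make the proposed extension concrete, the following is a minimal Python sketch of Pearson cross correlation between two word signals at an integer lag. The function name, lag convention, and the toy signal values are illustrative assumptions, not part of EDCoW; a lag of 0 reproduces the zero-lag correlation the current design uses:

```python
import math

def cross_correlation(x, y, lag=0):
    """Pearson correlation between x[t] and y[t + lag] over their overlap.

    A positive lag means y is shifted `lag` samples later than x, so a
    high value at lag k > 0 suggests word y follows word x by k samples.
    """
    if lag < 0:
        return cross_correlation(y, x, -lag)
    xs, ys = x[: len(x) - lag], y[lag:]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = math.sqrt(
        sum((a - mx) ** 2 for a in xs) * sum((b - my) ** 2 for b in ys)
    )
    return num / den if den else 0.0

# Toy example: word b peaks one time unit after word a.
a = [0, 1, 5, 1, 0, 0]
b = [0, 0, 1, 5, 1, 0]
best = max(range(-2, 3), key=lambda k: cross_correlation(a, b, k))
# best == 1, i.e. b trails a by one sample
```

Searching over a small window of lags in this way would reveal which word of a pair tends to appear first within an event.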
8 Acknowledgements
We would like to thank Prof. Ee-Peng Lim from School of Information Systems, Sin-
gapore Management University for his valuable comments and discussion. We would
also like to thank Meichun Hsu and Malu Castellanos from Information Analytics Lab,
Palo Alto, for sharing the sentiment analysis algorithm which was applied in Voters’
Voice to understand netizens’ sentiments. Credit also goes to Tze Yang Ng, Herryanto
Siatono, Jesus Alconcher Domingo, and Ding Ma from the Applied Research Lab, HP
Labs Singapore, who contributed great effort in implementing Voters’ Voice.
References
[1] Edward H. Adelson and James R. Bergen. Spatiotemporal energy models for the
perception of motion. Journal of Optical Society of America A, 2(2):284–299,
February 1985.
[2] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent dirichlet allocation.
Journal of Machine Learning Research, 3:993–1022, 2003.
[3] Malu Castellanos, Riddhiman Ghosh, Mohamed Dekhil, Perla Ruiz, Sangamesh
Bellad, Umeshwar Dayal, Meichun Hsu, and Mark Schreimann. Tapping social
media for sentiments in real-time. In HP TechCon 2011, 2011.
[4] Ling Chen and Abhishek Roy. Event detection from flickr data through wavelet-
based spatial analysis. In CIKM ’09: Proceedings of the 18th ACM conference on
Information and knowledge management, pages 523–532, New York, NY, USA,
2009. ACM.
[5] Ingrid Daubechies. Ten lectures on wavelets. Society for Industrial and Applied
Mathematics, Philadelphia, PA, USA, 1992.
[6] Gabriel Pui Cheong Fung, Jeffrey Xu Yu, Philip S. Yu, and Hongjun Lu. Parame-
ter free bursty events detection in text streams. In VLDB ’05: Proceedings of the
31st international conference on Very large data bases, pages 181–192. VLDB
Endowment, 2005.
[7] Qi He, Kuiyu Chang, and Ee-Peng Lim. Analyzing feature trajectories for event
detection. In SIGIR ’07: Proceedings of the 30th annual international ACM
SIGIR conference on Research and development in information retrieval, pages
207–214, New York, NY, USA, 2007. ACM.
[8] Ilse C.F. Ipsen and Rebecca S. Wills. Mathematical properties and analysis of
google’s pagerank. Boletín de la Sociedad Española de Matemática Aplicada,
34:191–196, 2006.
[9] Gerald Kaiser. A friendly guide to wavelets. Birkhäuser Boston Inc., Cambridge,
MA, USA, 1994.
[10] Andreas M. Kaplan and Michael Haenlein. The early bird catches the news: Nine
things you should know about micro-blogging. Business Horizons, to appear,
2010.
[11] Jon Kleinberg. Bursty and hierarchical structure in streams. In KDD ’02: Pro-
ceedings of the eighth ACM SIGKDD international conference on Knowledge
discovery and data mining, pages 91–101, New York, NY, USA, 2002. ACM.
[12] M. E. J. Newman. Fast algorithm for detecting community structure in networks.
Physical Review. E, 69(6):066133, Jun 2004.
[13] M. E. J. Newman. Modularity and community structure in networks. Proceedings
of the National Academy of Sciences, 103(23):8577–8582, 2006.
[14] Sophocles J. Orfanidis. Optimum Signal Processing. McGraw-Hill, 1996.
[15] Jasmine Osada. Online popularity alone won’t get you elected. In Digital Life,
May 11, 2011, page 14. Straits Times, 2011.
[16] PearAnalytics. Twitter study - august 2009. http://www.pearanalytics.com/wp-
content/uploads/2009/08/Twitter-Study-August-2009.pdf, 2009.
[17] Saša Petrović, Miles Osborne, and Victor Lavrenko. Streaming first story detec-
tion with application to twitter. In NAACL ’10: Proceedings of the 11th Annual
Conference of the North American Chapter of the Association for Computational
Linguistics, 2010.
[18] Osvaldo A. Rosso, Susana Blanco, Juliana Yordanova, Vasil Kolev, Alejandra
Figliola, Martin Schürmann, and Erol Başar. Wavelet entropy: a new tool for
analysis of short duration brain electrical signals. Journal of Neuroscience Meth-
ods, 105(1):65–75, 2001.
[19] Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. Earthquake shakes twitter
users: real-time event detection by social sensors. In WWW ’10: Proceedings of
the 19th international conference on World wide web, pages 851–860, New York,
NY, USA, 2010. ACM.
[20] Gerard Salton and Christopher Buckley. Term-weighting approaches in automatic
text retrieval. Information Processing & Management, 24(5):513–523, 1988.
[21] Claude E. Shannon. A mathematical theory of communication. Bell System Tech-
nical Journal, 27:623–656, 1948.
[22] Helen Walker. Studies in the History of the Statistical Method. Williams &
Wilkins Co., 1931.
[23] Yiming Yang, Tom Pierce, and Jaime Carbonell. A study of retrospective and on-
line event detection. In SIGIR ’98: Proceedings of the 21st annual international
ACM SIGIR conference on Research and development in information retrieval,
pages 28–36, New York, NY, USA, 1998. ACM.