Any-language frame-semantic parsing

Anders Johannsen, Héctor Martínez Alonso, Anders Søgaard


Center for Language Technology
University of Copenhagen, Denmark
Njalsgade 140, DK-2300 Copenhagen S
{ajohannsen,alonso,soegaard}@hum.ku.dk

Abstract

We present a multilingual corpus of Wikipedia and Twitter texts annotated with FRAMENET 1.5 semantic frames in nine different languages, as well as a novel technique for weakly supervised cross-lingual frame-semantic parsing. Our approach only assumes the existence of linked, comparable source and target language corpora (e.g., Wikipedia) and a bilingual dictionary (e.g., Wiktionary or BABELNET). Our approach uses a truly interlingual representation, enabling us to use the same model across all nine languages. We present average error reductions over running a state-of-the-art parser on word-to-word translations of 46% for target identification, 37% for frame identification, and 14% for argument identification.

1 Introduction

Frame-semantic parsing is the task of automatically finding semantically salient targets in text, disambiguating the targets by assigning a sense (frame) to them, identifying their arguments, and labeling these arguments with appropriate roles. The FRAMENET 1.5 lexicon (https://framenet.icsi.berkeley.edu/) provides a fixed repository of semantic frames and roles, which we use in the experiments below.

Several learning and parsing algorithms have been developed for frame-semantic analysis (Johansson and Nugues, 2007; Das et al., 2014; Täckström et al., 2015), and frame semantics has been successfully applied to question answering (Shen and Lapata, 2007), information extraction (Surdeanu et al., 2003), and knowledge extraction (Søgaard et al., 2015b).

In contrast to PropBank-style semantic-role labeling (Titov and Klementiev, 2012), only very limited frame-semantic resources exist for languages other than English. We therefore focus on multilingual or cross-language frame-semantic parsing, leveraging resources for English and other major languages to build any-language parsers. We stress that we learn frame-semantic parsing models that can be applied to any language, rather than cross-lingual transfer models for specific target languages. Our approach relies on inter-lingual word embeddings (Søgaard et al., 2015a), which are built from topic-aligned documents. Word embeddings have previously been used for monolingual frame-semantic parsing by Hermann et al. (2014).

Contributions This paper makes the following three contributions. We present a new multilingual frame-annotated corpus covering five topics, two domains (Wikipedia and Twitter), and nine languages. We implement a simplified version of the frame-semantic parser introduced in Das et al. (2014). Finally, we show how to modify this parser to learn any-language frame-semantic parsing models using inter-lingual word embeddings (Søgaard et al., 2015a).

2 Data annotation

Figure 1 depicts a FRAMENET 1.5 frame-semantic analysis of a German sentence from Wikipedia. The annotator marked two words, Idee and kam, as targets. In frame-semantic parsing, target identification is the task of deciding which words (i.e., targets) trigger FRAMENET frames. Frame identification is the problem of disambiguating targets by labeling them with frames, e.g., COGITATION or COMING_UP_WITH. Argument identification is the problem of identifying the arguments of frames, e.g., Idee for COMING_UP_WITH.

Figure 1: Frame semantic annotation from the German Wikipedia data (Women’s Rights)

We had linguistically trained students annotate about 200 sentences from Wikipedia and 200 tweets each in their native language. The data was pre-annotated by obtaining all English translation equivalents of the source-language words through BABELNET (http://babelnet.org/) and finding the associated frames in the FRAMENET 1.5 training data. We presented annotators with all frames that could be triggered by any of the target word's translations. The Wikipedia and Twitter data cover the same five topics: Google, Angelina Jolie, Harry Potter, Women's Rights, and Cristiano Ronaldo. The topics were chosen to guarantee coverage for all nine languages, both in Wikipedia and Twitter. Our corpus, which covers nine languages, is publicly available at https://github.com/andersjo/any-language-frames. The languages we cover are Bulgarian (BG), Danish (DA), German (DE), Greek (EL), English (EN), Spanish (ES), French (FR), Italian (IT), and Swedish (SV). English is included as a sanity check of our cross-lingual annotation setup.
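As an illustration of the pre-annotation step described above, the sketch below assumes two precomputed lookup tables standing in for BABELNET and the FRAMENET 1.5 training data; the function name and data structures are ours, not part of the released corpus tooling.

    # Hypothetical sketch of the cross-lingual pre-annotation (not the actual tooling).
    # translations: source word -> set of English translation equivalents (via BABELNET)
    # frames_of:    English lemma -> set of frames it evokes in FRAMENET 1.5 training data

    def candidate_frames(word, translations, frames_of):
        """Union of all frames triggered by any English translation of `word`."""
        frames = set()
        for en_word in translations.get(word, set()):
            frames |= frames_of.get(en_word, set())
        return frames

    # Annotators choose among candidate_frames(word, ...); a valid frame missed by
    # the translation step cannot be recovered, which is why precision is reported
    # alongside F1 below.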
The English, Danish, and Spanish datasets were doubly annotated in order to compute inter-annotator agreement (IAA). The overall target identification IAA was 82.4% F1 for English, 81.6% for Danish, and 80.0% for Spanish. This is lower than a similar monolingual annotation experiment that recently reported target identification IAA at 95.3% (Søgaard et al., 2015b). The frame identification IAA scores were also higher in that study, at 84.5% and 78.1% F1. The drop in agreement seems mostly due to pre-tagging errors caused by erroneous or irrelevant word-to-word translations. The Spanish data has the lowest agreement score.

We compute the test-retest reliability of our annotations as the correlation coefficient (Pearson's ρ) between the two annotations. In Cronbach's α internal consistency table, the cut-off for acceptable reliability is 0.7. While there is certainly noise in our annotations, these scores are still consistently above the Cronbach cut-off. Also, we evaluate our models across 18 datasets, covering nine different languages with two domains each, although for readability we combine the Wikipedia and Twitter datasets for each language below.

The relatively low reliability compared to previous annotation efforts is due to the cross-lingual pre-annotation step, which was necessary to make annotation feasible. All languages, including English, were pre-annotated using BABELNET. We expect annotators to assign frames only when meaningful frames can be assigned, so the main source of error is that the pre-annotation may exclude valid frames. Hence, we will not only report F1-scores in our evaluations, but also precision, since recall may be misleading, penalizing the system for frames that could not be chosen by the annotators.

Table 1: Inter-annotator agreement (F1 in %)

                              EN     DA     ES
  Twitter and Wikipedia
    TARGET                   82.4   81.6   80.0
    FRAME                    73.5   72.3   60.8
    ARGUMENT                 70.7   55.0   83.5
    Test-retest reliability  74.4   78.6   71.8
  Twitter
    TARGET                   79.1   80.7   80.5
    FRAME                    68.8   72.3   58.6
    ARGUMENT                 70.0   86.2   57.5
    Test-retest reliability  71.0   78.7   73.1
3 Frame semantic parsing

3.1 Target identification

Following Das et al. (2014), we use part-of-speech heuristics to identify the words that evoke frames (target words). Frame-evoking words typically belong to a narrow range of parts of speech. Therefore, we only consider words as target candidates when tagged with one of the top k part-of-speech tags most commonly seen as targets in the training set. The k parameter is optimized to maximize F1 on our development language, Spanish, where we found k = 7 (the white-listed POS are nouns, verbs, adjectives, proper nouns, adverbs, and determiners). Surviving candidates are then translated into English by mapping the words into multilingual BABELNET synsets, which represent sets of words with similar meaning across languages. All English words in the BABELNET synsets are considered possible translations. If any of the translations is a potential target in FRAMENET 1.5, the current word is identified as a frame-evoking word.
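A minimal sketch of this procedure, assuming the POS whitelist from the parenthetical above and plain-dictionary stand-ins for the BABELNET synsets and the FRAMENET target lexicon (all names here are illustrative):

    # Whitelisted POS tags (per the parenthetical above; k was tuned on Spanish).
    POS_WHITELIST = {"NOUN", "VERB", "ADJ", "PROPN", "ADV", "DET"}

    def identify_targets(sentence, en_translations, framenet_targets):
        """Return the indices of candidate frame-evoking words.

        sentence:         list of (word, pos_tag) pairs
        en_translations:  word -> set of English words from shared BABELNET synsets
        framenet_targets: set of English lemmas that evoke frames in FRAMENET 1.5
        """
        targets = []
        for i, (word, pos) in enumerate(sentence):
            if pos not in POS_WHITELIST:
                continue  # keep only the POS tags most often seen as targets
            # The word survives if any of its English translations is a known target.
            if en_translations.get(word, set()) & framenet_targets:
                targets.append(i)
        return targets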
3.2 Frame identification

A target word is, on average, ambiguous between three frames. We use a multinomial log-linear classifier (http://hunch.net/~vw/) with default parameters to decide which of the possible frames evoked by the target word fits the context best. Our feature representation replicates that of Das et al. (2014) as far as possible, considering the multilingual setting where lexical features cannot be directly used. To compensate for the lack of lexical features, we introduce two groups of language-independent features that rely on multilingual word embeddings. One feature group uses the embedding of the target word directly, while the other is based on distance measures between the target word and the set of English words used as targets for a possible frame. We measure the minimum and mean distance (in embedding space) from the target word to the set of English target words, as well as the distances to each word individually.

Several of the features in the original representation are built on top of automatic POS annotation and syntactic parses. We use the Universal Dependencies v1.1 treebanks for the languages in our data to train part-of-speech taggers (TREETAGGER, http://www.cis.uni-muenchen.de/~schmid/tools/TreeTagger/) and a dependency parser (TURBOPARSER, http://www.cs.cmu.edu/~ark/TurboParser/) to generate the syntactic features. In contrast to Das et al. (2014), we use dependency subtrees instead of spans.
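As a sketch of the second feature group, the snippet below computes the minimum, mean, and per-word distances from a target word to the English words seen as targets for a candidate frame; the embedding table and frame-to-target map are assumed inputs, and the feature names are ours:

    import math

    def cosine_distance(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return 1.0 - dot / (norm_u * norm_v)

    def frame_distance_features(target_word, frame, embeddings, frame_targets):
        """Distance features (in interlingual embedding space) for one candidate frame.

        embeddings:    word -> vector (list of floats) in the interlingual space
        frame_targets: frame -> English words seen as targets for it in training
        """
        t = embeddings[target_word]
        pairs = [(w, cosine_distance(t, embeddings[w]))
                 for w in frame_targets[frame] if w in embeddings]
        if not pairs:
            return {}
        dists = [d for _, d in pairs]
        feats = {"min_dist": min(dists), "mean_dist": sum(dists) / len(dists)}
        for w, d in pairs:  # distances to each English target word individually
            feats["dist_to_" + w] = d
        # (The other feature group exposes the components of `t` directly.)
        return feats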
3.3 Argument identification

A frame contains a number of named arguments that may or may not be expressed in a given sentence. Argument identification is concerned with assigning frame arguments to spans of words in the sentence. While this task can benefit from information on the joint assignment of arguments, Das et al. (2014) report an improvement of less than 1% in F1 from using beam search to approximate a globally optimal configuration for argument identification. To simplify our system, we take all argument-identification decisions independently. We use a single classifier for argument identification, computing the most probable argument for each frame element. Each word index is associated with a span by the transitive closure of its syntactic dependencies (i.e., its subtree). Our greedy approach to argument identification thus amounts to scoring the n + 1 possible realisations of an argument for an n-length sentence (i.e., the n subtrees plus the empty argument) and selecting the highest-scoring subtree for each argument type allowed by the frame.

As the training data contains very few examples of each frame or role (e.g., Buyer in the frame COMMERCE_SCENARIO), we enable sharing of features for frame arguments that have the same name. The assumption is that arguments with identical names have similar semantic properties across frames; the argument Perpetrator, for example, is similar for the frames ARSON and THEFT.

The scores are the confidences of a binary classifier trained on <frame, argument, subtree> tuples. Positive examples are the observed arguments. We use the remaining n incorrect subtrees for a given <frame, argument> pair to generate negative training examples. A single binary classification model is trained for the whole data set.

As with frame identification, our features are similar to those of Das et al. (2014), with a few exceptions and additions. We use dependency subtrees instead of spans and replace all lexical features (which do not transfer cross-lingually) with features based on the interlingual word embeddings from Søgaard et al. (2015a). We use the embeddings to find the 20 most similar words in the training data and use these words to generate lexical features that match the source-language training data. Each feature is weighted by its cosine similarity with the target-language word.
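The following sketch makes the greedy decoding concrete: each of the n word indices induces a candidate span via its dependency subtree, an empty span represents an unexpressed argument, and each allowed argument independently takes the highest-scoring candidate. The classifier is abstracted as a scoring function; all names are illustrative:

    def subtree(head, children):
        """Span of a word: the transitive closure of its syntactic dependents."""
        span, stack = {head}, [head]
        while stack:
            for dep in children.get(stack.pop(), ()):
                if dep not in span:
                    span.add(dep)
                    stack.append(dep)
        return frozenset(span)

    def identify_arguments(frame, allowed_args, n_words, children, score):
        """Independently assign each argument its best subtree (or the empty span).

        children: word index -> indices of its direct dependents
        score:    (frame, argument, span) -> binary-classifier confidence; features
                  are shared across frames for arguments with the same name, e.g.
                  Perpetrator in ARSON and THEFT.
        """
        # n + 1 candidate realisations: one subtree per word, plus "unexpressed".
        candidates = [subtree(i, children) for i in range(n_words)] + [frozenset()]
        return {arg: max(candidates, key=lambda span: score(frame, arg, span))
                for arg in allowed_args}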

Table 2: Frame semantic parsing results (precision and F1 in %)

Target identification          BG    DA    DE    EL    EN    ES    FR    IT    SV   Avg.
F1          SYSTEM           85.5  73.6  58.4  52.9  80.2  89.1  66.1  69.0  72.8  72.0
            BASELINE         44.0  56.8  27.2  46.1  78.8  45.9  42.8  47.7  41.4  47.9
Precision   SYSTEM           89.2  70.9  66.2  36.4  96.3  84.9  51.8  53.4  63.4  67.0
            BASELINE         56.8  65.0  48.7  43.2  88.0  75.2  55.0  55.3  47.3  59.4

Frame identification           BG    DA    DE    EL    EN    ES    FR    IT    SV   Avg.
F1          SYSTEM           66.6  59.0  49.0  58.3  37.0  36.9  27.4  40.2  49.5  47.1
            BASELINE         19.3  14.1   8.5  12.6  48.8   8.2  10.4  15.0  10.1  16.3
            MFS              65.3  54.3  53.0  56.2  38.0  34.4  25.5  33.0  55.3  46.1
Precision   SYSTEM           72.8  64.7  57.9  67.1  49.3  45.6  36.9  47.1  65.5  56.3
            BASELINE         37.0  26.4  19.0  27.9  62.4  15.7  22.0  25.5  28.3  29.7
            MFS              67.7  59.4  57.4  60.1  46.1  42.3  33.4  41.5  61.5  52.2

Argument identification        BG    DA    DE    EL    EN    ES    FR    IT    SV   Avg.
F1          SYSTEM           40.8  36.0  28.5  39.3  25.3  19.8  18.0  26.3  28.7  29.2
            BASELINE         26.5  10.5   6.2   9.7  69.6   4.6   8.6  14.6   8.6  17.7
Precision   SYSTEM           39.6  33.3  26.3  36.7  24.0  18.1  16.8  24.8  26.4  27.3
            BASELINE         16.2   9.5   5.7   8.8  66.8   4.1   8.1  13.8   8.0  16.8

Baseline Our approach to multilingual frame-semantic parsing extends Das et al. (2014) to cross-lingual learning using the interlingual embeddings from Søgaard et al. (2015a). Our baseline is a more direct application of the SEMAFOR system (http://www.ark.cs.cmu.edu/SEMAFOR/) (Das et al., 2014), translating target-language text to English using word-to-word translations and projecting the annotation back. For word-to-word translation we use Wiktionary bilingual dictionaries (Ács, 2014), and we use frequency counts from UKWAC (http://wacky.sslmit.unibo.it/) to disambiguate words with multiple translations, preferring the most common one. The baseline and our system both use the training data supplied with FRAMENET for learning.
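For concreteness, here is a sketch of the baseline's translation step, with the Wiktionary dictionary and UKWAC frequency counts assumed to be plain mappings (both stand-ins for the actual resources):

    def translate_word_by_word(sentence, wiktionary, ukwac_counts):
        """Map each token to its most frequent English translation.

        wiktionary:   source word -> list of candidate English translations
        ukwac_counts: English word -> UKWAC corpus frequency
        """
        translated = []
        for word in sentence:
            candidates = wiktionary.get(word)
            if candidates:
                # Disambiguate multiple translations by corpus frequency.
                translated.append(max(candidates, key=lambda w: ukwac_counts.get(w, 0)))
            else:
                translated.append(word)  # assumption: untranslated words pass through
        return translated

    # SEMAFOR is then run on the translated token sequence, and its predictions
    # are projected back onto the source tokens (word-to-word, so indices line up).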
4 Results

Consider first the target identification results in Table 2. We observe that using BABELNET and our re-implementation of Das et al. (2014) performs considerably better than running SEMAFOR on Wiktionary word-by-word translations.

Our frame identification results are also presented in Table 2. Our system is better in six out of nine cases, whereas the most frequent sense (MFS) baseline is best in two. It is unsurprising that English fares best in this setup, because it does not undergo the word-to-word translation of the other datasets.

Argument identification is a harder task, and scores are generally lower; see the lower part of Table 2. Also, note that errors percolate: if we do not identify a target, or mislabel a frame, we can no longer retrieve the correct arguments. Nevertheless, we observe that we are better than running SEMAFOR on word-by-word translations in eight out of nine languages, that is, all except English.

Generally, we obtain error reductions over our baseline of 46% for target identification, 37% for frame identification, and 14% for argument identification. For English, we are only 2% (absolute) below IAA for target identification, but about 40% below IAA for frame and argument identification. For Danish, the gap is smaller.
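These figures follow from the macro-averaged F1 scores in Table 2, assuming error reduction is measured as the relative reduction in error, with error defined as 100 - F1; a quick check:

    def error_reduction(system_f1, baseline_f1):
        """Relative reduction in error (error = 100 - F1), in percent."""
        return 100.0 * (system_f1 - baseline_f1) / (100.0 - baseline_f1)

    # Average F1 from Table 2 (SYSTEM vs. BASELINE):
    print(round(error_reduction(72.0, 47.9)))  # target identification   -> 46
    print(round(error_reduction(47.1, 16.3)))  # frame identification    -> 37
    print(round(error_reduction(29.2, 17.7)))  # argument identification -> 14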
If we compare performance on the Wikipedia and Twitter datasets, we see that target identification and frame identification scores are generally higher for Wikipedia, while argument identification scores are higher for Twitter. While Wikipedia is generally more similar to the newswire/balanced corpus in FRAMENET 1.5, the sentence length is shorter in tweets, making it easier to identify the correct arguments.

5 Conclusions

We presented a multilingual frame-annotated corpus covering nine languages in two domains. With this corpus we performed target, frame, and argument identification experiments, outperforming a baseline that runs SEMAFOR on word-to-word translations. Our approach is a delexicalized version of Das et al. (2014) with a simpler decoding strategy that, crucially, uses multilingual word embeddings to achieve any-language frame-semantic parsing. Over the baseline of running SEMAFOR on word-to-word translations, we obtain error reductions of 46% for target identification, 37% for frame identification, and 14% for argument identification.

References
Judit Ács. 2014. Pivot-based multilingual dictionary building using Wiktionary. In LREC.

Dipanjan Das, Desai Chen, Andre Martins, Nathan Schneider, and Noah Smith. 2014. Frame-semantic parsing. Computational Linguistics, 40(1):9–56.

Karl Moritz Hermann, Dipanjan Das, Jason Weston, and Kuzman Ganchev. 2014. Semantic frame identification with distributed word representations. In ACL.

Richard Johansson and Pierre Nugues. 2007. Extended constituent-to-dependency conversion for English. In NODALIDA.

Dan Shen and Mirella Lapata. 2007. Using semantic roles to improve question answering. In EMNLP.

Anders Søgaard, Željko Agić, Héctor Martínez Alonso, Barbara Plank, Bernd Bohnet, and Anders Johannsen. 2015a. Inverted indexing for cross-lingual NLP. In ACL.

Anders Søgaard, Barbara Plank, and Héctor Martínez Alonso. 2015b. Using frame semantics for knowledge extraction from Twitter. In AAAI.

Mihai Surdeanu, Sanda Harabagiu, John Williams, and Paul Aarseth. 2003. Using predicate-argument structures for information extraction. In ACL.

Oscar Täckström, Kuzman Ganchev, and Dipanjan Das. 2015. Efficient inference and structured learning for semantic role labeling. TACL.

Ivan Titov and Alexandre Klementiev. 2012. Crosslingual induction of semantic roles. In ACL.
