Learning The Structure of Mixed Initiative Dialogues Using a Corpus of Annotated Conversations
G Flammia, VW Zue
Fifth European Conference on Speech Communication and Technology, 1997
Abstract
This paper reports an ongoing effort to derive linear discourse structures from a corpus of telephone conversations. First, we would like to determine how reliably human annotators can tag discourse segments in dialogues. Second, we begin to investigate how to build machine models for performing this annotation task. To carry out our research, we use a corpus of transcribed and annotated human-human dialogues in a specific information retrieval domain (Movie theater schedules). We conducted an experiment in which 25 different dialogues have each been annotated by at least seven different people. We found that the average precision and recall among annotators in placing segment boundaries is 84.3%, and in assigning segment purpose labels is 80.1%. A simple discourse segment parser based on finite state machines is able to cover 56% of the same dialogues. When the finite state grammar is able to analyse a dialogue, it agrees with human annotators in placing segment boundaries with 59.4% precision and 66.4% recall, and it agrees in segment label accuracy at the 59% level.
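The inter-annotator and parser agreement figures above are standard precision/recall computations over segment boundary positions. A minimal sketch of that metric (the boundary sets below are hypothetical, not the paper's data):

```python
def boundary_precision_recall(predicted, reference):
    """Precision/recall of predicted segment boundaries against a
    reference annotation. A boundary is the utterance index at which
    a new discourse segment starts."""
    predicted, reference = set(predicted), set(reference)
    hits = len(predicted & reference)  # boundaries both annotations agree on
    precision = hits / len(predicted) if predicted else 0.0
    recall = hits / len(reference) if reference else 0.0
    return precision, recall

# Hypothetical annotations of one dialogue: annotator A places segment
# boundaries before utterances 0, 5, 12, 17; annotator B before 0, 5, 13, 17.
p, r = boundary_precision_recall([0, 5, 12, 17], [0, 5, 13, 17])
print(p, r)  # 0.75 0.75
```

Averaging such pairwise scores over all annotator pairs and dialogues yields summary figures like the 84.3% reported in the abstract.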