Logical Reasoning for Task Oriented Dialogue Systems

Sajjad Beygi, Maryam Fazel-Zarandi, Alessandra Cervone, Prakash Krishnan, Siddhartha Reddy Jonnalagadda
[email protected], [email protected], [email protected], [email protected], [email protected]

arXiv:2202.04161v1 [cs.CL] 8 Feb 2022

Abstract

In recent years, large pretrained models have been used in dialogue systems to improve successful task completion rates. However, the lack of reasoning capabilities in dialogue platforms makes it difficult to provide relevant and fluent responses, unless the designers of a conversational experience spend a considerable amount of time implementing these capabilities in external rule-based modules. In this work, we propose a novel method to fine-tune pretrained transformer models such as RoBERTa and T5 to reason over a set of facts in a given dialogue context. Our method includes a synthetic data generation mechanism which helps the model learn logical relations such as comparison between lists of numerical values, inverse relations (and negation), inclusion and exclusion for categorical attributes, application of a combination of attributes over both numerical and categorical values, and spoken form for numerical values, without the need for an additional training dataset. We show that the transformer-based model can perform logical reasoning to answer questions when the dialogue context contains all the required information, and that otherwise it is able to extract appropriate constraints to pass to downstream components (e.g., a knowledge base) when partial information is available. We observe that transformer-based models such as UnifiedQA-T5 can be fine-tuned to perform logical reasoning (such as comparison of numerical and categorical attributes) over attributes seen at training time (e.g., accuracy of 90%+ for comparison over up to kmax = 5 values on a held-out test set).

[Figure 1: The dialogue system with reasoning ability.]

1 Introduction

Logical reasoning is an important aspect of human thinking and communication. Humans reason over beliefs, preferences, time, facts, and other contextual information to achieve complex tasks, derive meaning, and analyze emotions. Current task-oriented dialogue systems, however, only support very limited forms of logical reasoning. More specifically, although reasoning ability has been investigated as part of chatbots (Cui et al., 2020) and question-answering systems (Huang et al., 2019; Chen et al., 2020), in many task-oriented dialogue systems today, reasoning is mainly focused on determining which slot values are still unknown to the system but required, and eliciting them (Guo et al., 2017). However, in realistic task-oriented dialogues, logical reasoning is required to understand the user's request, ask questions that help address the user's task successfully, and minimize asking irrelevant questions. The lack of robust, generalizable reasoning capabilities for dialogue systems requires developers to spend a considerable amount of time implementing these capabilities in external, rule-based and domain-specific components.
This leads to a poor user experience, requiring users to often correct the system's understanding, repeat themselves to ask the same question in different ways, restart the conversation when the system fails to recover from a 'dead-end', or even change their goal.

In this work, we propose to build on recent advances in research on logical reasoning and deep networks (e.g., Dong et al. 2019; Wang et al. 2019; Xie et al. 2019; Clark et al. 2020; Arabshahi et al. 2020) to bring reasoning capabilities to task-oriented dialogue systems. Our primary focus in this work is on mechanisms by which logical reasoning can be learned and used in conversational systems. In this direction, we propose a novel deep learning method to fine-tune pretrained models to reason over numerical and categorical attributes in the dialogue context, and present an architecture for the integration of this model in task-oriented dialogue systems. Our objective is for the model to perform logical reasoning to respond to queries when it has all the required information available in the dialogue context without additional external logic (e.g., "Add the most popular to my cart" in Figure 1), to extract constraints and inform downstream components when it only has partial context (e.g., "Actually I'm allergic to berries. Find something cheaper and with vanilla flavor" in Figure 1, where cheaper means cheaper than what was shown so far), and to not provide an answer when it does not have any relevant information, delegating to the dialogue policy to determine the next action.

We specifically choose to fine-tune transformers since these models operate on language directly, do not impose any structure on the reasoning process (Clark et al., 2020), and allow us to leverage the knowledge and diversity of language that the pretrained models have already learned. Furthermore, Ding et al. (2020) recently showed that these approaches can outperform neuro-symbolic methods. Our approach is similar to recent works on using transformers as soft reasoners (Clark et al., 2020; Talmor et al., 2020). However, compared to these methods, we focus on use cases relevant to conversational systems, and our model goes beyond predicting a true/false response to directly predicting the answer when the model has the information, or extracting constraints when it has partial information. In this direction, we report experimental results showing that, using our training method, transformers can learn to reason over numerical and categorical attributes in the dialogue context.

Note that although we use transformers for our experiments, our proposed method can be used to generate data and train any other seq2seq model for the same task and be integrated with any dialogue system in a similar manner. Furthermore, our proposed method differs from question answering or machine reading comprehension in that we are not looking for an answer in a specific passage; rather, we want the model to reason over facts in the dialogue context to draw parallels and conclusions to inform decision making, similar to how humans reason over a multi-turn conversation.

2 Related Work

The approaches for integrating reasoning with deep networks can be categorized as follows.

Reasoning after Semantic Parsing These approaches convert utterances to a semantic representation and feed it to a set of rules or a formal reasoner. For example, Kamath and Das (2018) provide examples where, given a natural language utterance and context in the form of a relational database, the system first converts the natural language utterance to a SQL query that is then executed using standard SQL grammar to retrieve the answer. This is also similar in approach to how some teams that participated in the WikiSQL task (Victor et al., 2017) developed natural language interfaces for relational databases. However, writing and maintaining rules is not scalable, especially as more complex types of reasoning become needed. The data annotation itself becomes hard to manage efficiently as more functionalities need to be supported. Furthermore, deep semantic parsing, reliably extracting attributes and relations, and operating on multi-sentence input remain a challenge.

Satisfiability-based Approaches Wang et al. (2019) propose to integrate a differentiable maximum satisfiability solver into the loop of larger deep learning systems, and use this approach to successfully learn logical structures such as the rules of Sudoku. Previous works have shown that temporal reasoning can be modeled as a propositional satisfiability problem (Pham et al., 2008); however, generalizability to other types of reasoning needs further investigation. Although covering a rich class of problems, these approaches impose a structure on the reasoning problem (Clark et al., 2020), i.e., learning of logical structure specifically as expressed by satisfiability problems.
Neuro-symbolic Approaches Neuro-symbolic systems are hybrid models that leverage neural networks and symbolic reasoning to integrate learning and reasoning. Besold et al. (2017) provide a survey of how symbolic approaches for reasoning are integrated with machine learning approaches. More recently, Dong et al. (2019) propose Neural Logic Machines and apply them to different tasks such as relational reasoning and sorting. Arabshahi et al. (2020) propose an end-to-end differentiable solution that uses a Prolog proof trace to learn rule embeddings from data, and apply their approach to the task of uncovering commonsense presumptions. Similarly, Xie et al. (2019) generate a graph model to embed logic rules into the prediction. However, Ding et al. (2020) show that a fully-learned neural network with the right inductive biases can outperform neuro-symbolic approaches in the context of spatio-temporal interactions between objects.

Transformer Approaches Clark et al. (2020) and Talmor et al. (2020) propose to train transformers to reason over natural language sentences, bypassing a formal representation, and show that such reasoning over language is learnable. Ding et al. (2020) apply a similar technique to visual question answering and show that their approach outperforms neuro-symbolic approaches. Han et al. (2020) use a similar approach to fine-tune a language model for event temporal reasoning. Our approach builds on top of these works in that we integrate reasoning into task-oriented dialogues and go beyond predicting a true/false response for an input, instead directly predicting the answer when the model has the information, or extracting constraints when it has partial information.

Knowledge Grounding in Dialogue Similar to how Victor et al. (2017) retrieve knowledge from Wikipedia, approaches such as Ghazvininejad et al. (2018), Neelakantan et al. (2019), and Gopalakrishnan et al. (2019) retrieve knowledge from a database to be incorporated into dialogue. These approaches extend the seq2seq approach to condition on the facts present in the knowledge bases. While this is a promising architecture, such approaches are good for applications such as knowledge-grounded open-domain chat, but not for supporting reasoning in task-oriented dialogues.

Other Approaches There are also other techniques in the literature, such as integrating rules defined in first-order logic with knowledge distillation (Hu et al., 2016), that fall outside the above categories. There have also been efforts such as CLUTRR (Sinha et al., 2019), the bAbI dataset (Weston et al., 2015), the Single Rule Test (Richardson et al., 2020), the QuaRTz dataset (Tafjord et al., 2019), HotpotQA (Yang et al., 2018), and ROPES (Reasoning over Paragraph Effects in Situations) (Lin et al., 2019), which focus on creating benchmarks for reasoning that measure how well existing systems perform on generalized reasoning.

3 Problem Statement

Task-oriented dialogue systems use a natural language understanding component to extract semantic meaning from the user utterance, and elicit constraints from users to understand their goals in order to provide information, perform a task, or present options and alternatives retrieved from external knowledge sources (e.g., through API calls) for users to choose from. As such, we focus on reasoning over tasks and recommended items in the dialogue, which are typically characterized by different attributes, for example, movie names and show-times for a ticket booking scenario. These systems rely on such representations to answer user queries such as "At what time is Vertigo playing?" by performing API calls (e.g., searchTime(movie=Vertigo)) which return the required information in a structured form (Movie=Vertigo, Times=[12:30-2:30 PM, 3-5 PM], Theater=Cineplex). The required information is then returned to the user in natural language (e.g., "Vertigo is playing today from 12:30 to 2:30 PM and from 3 to 5 PM."). However, in most currently available task-oriented dialogue systems, if the user next said "Book me the earliest one," although this information is already available to the system from the previous API call, given the lack of reasoning abilities the system would either not support such queries, or it would have to make an additional independent API call (e.g., searchEarliestTime(movie=Vertigo) or searchTime(movie=Vertigo, modifier=earliest)), creating redundant latency in the response and requiring the developer of the system to add APIs/rules to handle these use cases.
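To make the redundancy concrete, the following is a minimal sketch of the alternative argued for here: resolving "Book me the earliest one" from the structured result already returned by the first API call, rather than issuing a second, modifier-specific call. The function and field names are hypothetical illustrations, not an API from this paper.

```python
from datetime import datetime

# Structured result already returned by searchTime(movie=Vertigo);
# the field names here are illustrative, not a prescribed schema.
api_result = {
    "movie": "Vertigo",
    "times": ["12:30 PM", "3:00 PM"],
    "theater": "Cineplex",
}

def earliest_showtime(result: dict) -> str:
    """Answer 'Book me the earliest one' from the cached result,
    avoiding an extra call such as searchTime(..., modifier=earliest)."""
    return min(result["times"], key=lambda t: datetime.strptime(t, "%I:%M %p"))

print(earliest_showtime(api_result))  # -> 12:30 PM
```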
Given the above description, our objective is to train a model to learn how to reason over the information provided in the context. We assume the following scenarios for each user utterance:

1. Reasoning-required, answer available in the context: The case where the user utterance requires reasoning and it is possible to infer the answer to the user query from the information returned by the previous API calls (e.g., "Give me the earliest one"). Rather than extracting mentions and querying the knowledge base again, in this case the model directly outputs the predicted next system action along with its arguments.

2. Reasoning-required, answer not available in the context: The case where the user utterance requires reasoning, but it is not possible to infer the answer to the user query from the information returned by the previous API calls (e.g., "Show me cheaper options"). In this case the model extracts constraints from the user utterance to be passed to the back-end API.

3. Reasoning-not-required: The case where the user utterance does not require reasoning (e.g., "Please repeat").

In order to support these scenarios, the model needs to learn to 1) compare different items based on numerical and categorical attributes, 2) compare across a list of numerical values to identify the minimum/maximum value among alternatives, 3) formulate constraints when it is not possible to infer the answer to the user query given the dialogue context but partial inference can be made, and 4) respond no answer when no reasoning is required for answering the user's request.

[Figure 2: The reasoning model can be easily integrated in a task-oriented dialogue architecture, as a component of the Dialogue Manager, i.e., the module in charge of predicting the next system action.]

Figure 2 shows the overall architecture of a dialogue system with the reasoning model. The new model is part of the dialogue manager, which predicts the next system action, alongside a domain-specific dialogue policy. The dialogue policy can predict API calls for retrieving information from a back-end Knowledge Base (KB) or can predict a list of natural language generation (NLG) actions for communicating information to the user (requesting constraints, informing available options, etc.). The reasoning model is added as a modular component that runs alongside the dialogue policy model. Although it would be possible to combine the two models, e.g., by extending the reasoning model to also predict domain-specific APIs and actions, we believe that this modular architecture allows the reuse of a trained reasoning model across different domains and tasks.
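As an illustration of this modular integration, the sketch below routes the reasoning model's output inside the dialogue manager, delegating to the domain policy on NoAnswer. All interface names are hypothetical; the paper does not prescribe a concrete API.

```python
def dialogue_manager_step(user_query: str, context: str,
                          reasoning_model, dialogue_policy) -> str:
    """Route one turn: consult the reasoning model first, fall back to
    the domain-specific dialogue policy when it emits NoAnswer."""
    output = reasoning_model.predict(user_query, context)

    if output == "NoAnswer":
        # Scenario 3: no reasoning required; let the policy decide.
        return dialogue_policy.next_action(user_query, context)
    if output.startswith(("inform", "select")):
        # Scenario 1: the answer was inferable from the context.
        return output
    # Scenario 2: partial information; the output holds extracted
    # constraints (e.g. "less-than price 2") for a back-end API call.
    return dialogue_policy.call_api_with_constraints(output)
```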
4 Method

In this work we propose to fine-tune transformers to learn logical reasoning over dialogue context in the form of natural language sentences, bypassing a formal representation and showing that such reasoning over language is learnable.

4.1 Data Generation

We describe a general methodology¹ for automatically creating a dataset for logical reasoning in task-oriented dialogue systems. Each example in the dataset is a triple (user-query, context, answer), where the user-query refers to the last user utterance, the context refers to the dialogue context and the information returned by API calls to the back-end system (see an example in Figure 1), and the answer refers to the next action to be taken by the dialogue system. The user-query and the context constitute the information given as input to the model, while the answer represents the output.

In order to simulate the context, i.e., the objects returned by API calls to the back-end system, we assume an available knowledge base (KB). We further assume that the KB has different items, identified by an item-name (e.g., Yogurt Anisakis), an item-type (e.g., yogurt), and a series of attributes, each with an attribute key and value (e.g., price: $3.40). For generalizability, we do not assume that all item types have the same attributes, nor that all items of the same type have the same attributes.

The data generation procedure consists of four main steps:

1. Items sampling: In order to construct input-output pairs for training, we first randomly select k items, where 0 ≤ k ≤ kmax, with the same item-type to create the input context c.

¹ Upon acceptance, we plan to release the data generated with the described methodology to the research community.
[Figure 3: Task structure for the generative model.]

While in this work we compare items of the same item-type, this is not a strict requirement of the data generation. The motivation behind this choice is a typical scenario of a task-oriented dialogue system where a user might search for a specific object (movie times of Vertigo) and the system would subsequently present different options for that object ("Vertigo is playing today from 12:30 to 2:30 PM and from 3 to 5 PM.").

2. Context conversion to pseudo-language: Once a set of items has been sampled, we transform the structured information (list of triplets) associated with each item into pseudo-language using a template-based approach, as in Figure 3. Our templates are constructed in a domain-agnostic way, so that they are directly applicable to other scenarios. We define two main types of statements in pseudo-language, each associated with a specific template (see the first two rows in Table 1). The IsA template is used to define the type of an item, while the HasAttribute relation is used for triplets expressing the value of a given attribute for the specified item. We note that other templates for the context statements could easily be created to accommodate different scenarios. Finally, we concatenate all the generated statements, after randomizing their order to improve robustness, to form the final input context.
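A minimal sketch of steps 1 and 2, assuming the KB is held as a simple list of dicts (the data layout is our assumption; the IsA and HasAttribute templates are the ones from Table 1):

```python
import random

# Hypothetical KB: each item has an item-name, an item-type, and attributes.
KB = [
    {"name": "Yogurt Anisakis", "type": "yogurt",
     "attributes": {"price": 3.55, "rating": 4.5, "diet": "vegan"}},
    {"name": "Yogurt Rigels", "type": "yogurt",
     "attributes": {"price": 2.10, "rating": 4.3, "flavor": "vanilla"}},
    {"name": "Yogurt Dorado", "type": "yogurt",
     "attributes": {"price": 5.00, "rating": 3.9, "diet": "keto"}},
]

def sample_context(kb, item_type: str, k_max: int = 5) -> str:
    """Step 1: sample k (0 <= k <= k_max) items of one item-type.
    Step 2: render each item with the IsA / HasAttribute templates
    and shuffle the statements for robustness."""
    candidates = [item for item in kb if item["type"] == item_type]
    k = min(random.randint(0, k_max), len(candidates))
    statements = []
    for item in random.sample(candidates, k):
        statements.append(f"{item['name']} is a {item['type']}.")   # IsA
        for attr, value in item["attributes"].items():              # HasAttribute
            statements.append(
                f"{item['name']} has attribute {attr} with value {value}.")
    random.shuffle(statements)
    return " ".join(statements)

print(sample_context(KB, "yogurt"))
```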
3. Query generation: In this step we generate a set of user queries q suitable for the given context using templates, thus generating several different input pairs (c, qi), where i is an index over possible queries related to the context c. Note that templates for the queries are manually created for each attribute, but they are all agnostic to the domain of the task-oriented dialogue system. Examples of user queries are shown in Table 1. As can be seen, each template for the user query is associated with the expected output action predicted by the system and the particular reasoning ability involved (e.g., Inform). We also consider more complex cases such as negation, e.g., "I don't want anything vegan," and conjunction, e.g., "Which is the cheapest one and doesn't have strawberry?". Additionally, each template is associated with several different surface-form variations to add robustness to the model. Each generated user query is then prepended to the context c. An additional, optional post-processing step converts all the numerical values in the user queries from written to spoken format (e.g., "$3.50" is converted to "three dollars fifty"). This step might be required in the context of a spoken dialogue system, which takes as input the output of the Automatic Speech Recognition model directly.
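The written-to-spoken post-processing can be as simple as the sketch below. The "$3.50" → "three dollars fifty" rule follows the example above; the helper itself and its use of the third-party num2words package are our assumptions, not the paper's implementation.

```python
import re
from num2words import num2words  # assumption: third-party num2words package

def spoken_price(match: re.Match) -> str:
    """Render "$3.50" as "three dollars fifty", per the example above."""
    dollars, cents = match.group(1), match.group(2)
    spoken = num2words(int(dollars)) + " dollars"
    if cents and int(cents) > 0:
        spoken += " " + num2words(int(cents))
    return spoken

def to_spoken_form(query: str) -> str:
    # Matches prices like $5 or $3.50 inside a generated user query.
    return re.sub(r"\$(\d+)(?:\.(\d{2}))?", spoken_price, query)

print(to_spoken_form("I want something cheaper than $3.50"))
# -> "I want something cheaper than three dollars fifty"
```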

4. Output creation: In the final step, for each generated input, we automatically create the output by combining the information from each template with regard to the action type to take, and calculating the correct answer from the context, e.g., Yogurt Anisakis is the cheapest. The output space consists of four main outcomes, as shown in Table 2, depending on whether reasoning is required to respond to the user utterance, and whether the answer is retrievable from the available context. We use the special token NoAnswer for user queries that do not require reasoning. When the answer is retrievable from the context and reasoning is required, we further distinguish between two main cases: inform, when the user is simply seeking information (e.g., "Which one is the cheapest?"), thus performing an Information-Transfer type of Dialogue Act (see Bunt et al. (2010)), and select, when the user is requesting the system to perform a specific action (e.g., "Add the cheapest to my cart."), an Action-Discussion Dialogue Act. For the inform action, we also distinguish in the output space between True/False questions and open-answer questions.

Type               Name              Template                                                  Example
Context Statement  IsA               [subject] is [object].                                    Yogurt Anisakis is a yogurt.
Context Statement  HasAttribute      [subject] has attribute [attribute] with value [value].  Yogurt Anisakis has attribute price with value 3.55.
User Query         Inform            I want something [predicate] [value].                     I want something cheaper than $5.
                                     Which one is [predicate]?                                 Which one is the cheapest?
User Query         Inform_TrueFalse  Is [subject] [predicate]?                                 Is Yogurt Anisakis the cheapest?
User Query         Select            Select [predicate].                                       Select the cheapest.

Table 1: Examples of templates for context statements (in pseudo-language) and user queries (in natural language).

Reasoning Required  Answer in Context  Action Type  Example                                        Output
Yes                 Yes                Inform       Is the first one cheaper than the second one?  inform <true/false>
Yes                 Yes                Inform       Which one is the cheapest?                     inform <item_name>
Yes                 Yes                Select       Add the cheapest to my cart.                   select <item_name>
Yes                 No                 Constraint   Give me something cheaper                      <relation> <attribute> <value>
No                  −                  No Answer    Find yogurt.                                   NoAnswer

Table 2: Output space. In cases where there are multiple answers/constraints, they are concatenated with and.
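For the answer-computation part of this step, a sketch of how a gold output string in the Table 2 format can be derived from the sampled items (item representation as in the earlier sketch; the helper is illustrative, not the paper's code):

```python
def cheapest_output(items: list, action: str = "inform") -> str:
    """Compute the gold answer for 'Which one is the cheapest?'
    (inform <item_name>) or 'Add the cheapest to my cart.'
    (select <item_name>), per the output space in Table 2."""
    cheapest = min(items, key=lambda item: item["attributes"]["price"])
    return f"{action} {cheapest['name']}"

items = [
    {"name": "Yogurt Anisakis", "attributes": {"price": 3.55}},
    {"name": "Yogurt Rigels", "attributes": {"price": 2.10}},
]
print(cheapest_output(items))            # -> inform Yogurt Rigels
print(cheapest_output(items, "select"))  # -> select Yogurt Rigels
```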

In the case of constraint-extraction answers, i.e., when the user utterance requires reasoning but the context has only partial information, the output consists of the list of constraints extracted from the user query, concatenated with and, as shown in Table 3. The constraints extracted from the user query depend on the context, not only in terms of the action to take (whether to provide an answer directly or to extract constraints), but also in terms of constraint generation. In the last row of Table 3, for the user query ("...more popular?"), the reasoning model relies on the context by looking at the ratings of the available products to extract the appropriate rating constraint (e.g., more-than rating 4.5).

User Utterance              Constraint
Give me something vegan.    include diet vegan
I don't want mango.         exclude flavor mango
It should cost $1.50.       equal price 1.50
I want it cheaper than $2   less-than price 2
Anything more popular?      more-than rating 4.5

Table 3: Examples of constraint representations, given as context the one in Figure 2.
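A sketch of how the context-dependent constraint in the last row could be derived; the constraint format mirrors Table 3, while the helper itself is our illustration:

```python
def more_popular_constraint(items: list) -> str:
    """'Anything more popular?' has no answer in the context, so the
    model must emit a constraint relative to the shown ratings,
    e.g. 'more-than rating 4.5' (Table 3, last row)."""
    best_rating = max(item["attributes"]["rating"] for item in items)
    return f"more-than rating {best_rating}"

items = [
    {"name": "Red Seedless Grapes", "attributes": {"rating": 4.5}},
    {"name": "Conventional Cut Grapes", "attributes": {"rating": 4.3}},
]
print(more_popular_constraint(items))  # -> more-than rating 4.5
```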
4.2 Training Procedure

In order to teach the model rules such as inverse relations and transitivity by example, we investigate appending to the context clues that describe the relations of one or more items. These clues are appended to the final input context during training, but not at inference time. We consider two types of clues: 1) a Comparative clue describes a comparison of two items in the context along a specific attribute. The template for this clue is: [subject] is [predicate] [object], where predicate refers to the quality regarding which the items are being judged (e.g., "cheaper than", "pricier than", "less than", "equal to"). 2) a Superlative clue describes an object at the upper/lower range of a specific attribute. The template for this clue is: [subject] is [predicate] with value [value]. Using the base data generation and clue generation, we construct three types of training scenarios, as follows (see the sketch after this list):

Case I - Clueless context: This scenario uses the base context encompassing the information about the items' different attributes. This is also the scenario we expect at inference time.

Case II - Comparative clues: In this scenario, we sort the items in the base context according to the values of their attributes and append to the base context the comparative relation between pairs of items that are neighbors. The direction of the comparison is selected at random (e.g., "A is larger than B" or "B is smaller than A") and is independent of the user query. This scenario is designed to assess the ability of the model to learn inverse relations, since in some queries users will ask for a relation in the opposite direction with respect to the comparative clue in the context (e.g., the user asks "Is the second one cheaper than the first one?" while the context contains "A is pricier than B"), so that the model can learn that these two statements are equivalent. When we have more than two items in the context, we can also assess the ability of the model to learn transitivity, as we might have cases where the user asks "Is the first one pricier than the third one?" and the context contains "A is pricier than B" and "B is pricier than C".

Case III - Superlative clues: In this scenario, besides comparative clues, we also add superlative clues to the context to give hints to the model about which item in the context has the extreme value of an attribute (e.g., "A is the cheapest").
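The following sketch shows how the Case II/III clues could be appended, with the item representation from the earlier sketches and the clue templates above; the function itself is our illustration:

```python
import random

def add_clues(items, attr, pred=("cheaper than", "pricier than"),
              superlative=None):
    """Case II: comparative clues between neighboring items sorted by
    `attr`, with randomized direction (so the model must learn inverse
    relations). Case III: also a superlative clue, e.g. superlative=
    "the cheapest" yields 'A is the cheapest with value ...'."""
    lower, higher = pred
    ranked = sorted(items, key=lambda item: item["attributes"][attr])
    clues = []
    for a, b in zip(ranked, ranked[1:]):  # neighbors in sorted order
        if random.random() < 0.5:
            clues.append(f"{a['name']} is {lower} {b['name']}.")
        else:
            clues.append(f"{b['name']} is {higher} {a['name']}.")
    if superlative:
        clues.append(f"{ranked[0]['name']} is {superlative} "
                     f"with value {ranked[0]['attributes'][attr]}.")
    return clues

# Case III over price: add_clues(items, "price", superlative="the cheapest")
```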
We pick the number of items in each context randomly from 0 to kmax, so that the model can be robust in its predictions for different numbers of items in the context. We also consider an additional training procedure, which we refer to as Case IV, where we randomly select one of Case I, Case II, or Case III as our context. The random selection of the context exposes the model to all three different cases and, by cross-learning between them, it learns to apply the inverse and transitivity rules to examples with a Case I context to draw the right conclusion.

5 Experiments

We showcase our proposed methodology in the context of a dialogue system for a shopping assistant (see Appendix A for an example interaction). We use an ontology for data generation which consists of item-type (e.g., yogurt) and item-name (e.g., "Greek yogurt Anisakis"), and each item is characterized by two numerical attributes, price and rating, and two categorical attributes, diet and flavor. This choice of attributes helps us explore and assess the model's performance based on attribute characteristics. Table 4 summarizes the size of the catalog or range of values for each attribute.

Attribute   Rating           Price              Diet   Flavor
Catalog     Bounded Numeric  Unbounded Numeric  10     10K

Table 4: Attributes and their catalog sizes.

We consider two settings for assessing the logical reasoning capability of transformer models. In the first setting, we fine-tune RoBERTa-base (Liu et al., 2019) with a training dataset generated for reasoning using only numerical attributes. In this setting, we only focus on True/False prediction for each query q given the facts provided in the context c. The objective of this experiment is to understand whether transformer models can learn to reason over numerical attributes. In the second setting, we use a T5 model (Raffel et al., 2019) fine-tuned on the UnifiedQA data (Khashabi et al., 2020) to predict a sequence similar to those given in Table 2. In both cases, we use disjoint catalogs to generate examples for the train/dev/test datasets to avoid over-fitting to attribute values.

5.1 True/False Queries

We consider True/False reasoning over attributes, such as assessing a conclusion about the comparison of two values of an attribute, or finding the minimum or maximum value among a list of values of an attribute for several items. Example queries include "is the second item the cheapest one" and "is the first one cheaper than the fourth one". We fine-tune RoBERTa to predict True/False for each (q, c) by adding a classification layer on top of the RoBERTa encoder model to perform binary classification. The training hyper-parameters for fine-tuning this model are provided in Appendix B. For these experiments, we generate 120K samples for the train set, 5K for dev, and 25K for the test set.
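A sketch of this setup with the Hugging Face transformers library, which is our assumed tooling; the paper only specifies RoBERTa-base with a binary classification head (hyper-parameters in Appendix B), and the sentence-pair encoding of (q, c) shown here is one plausible input format:

```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
# Adds a classification head on top of the RoBERTa encoder;
# two labels encode the True/False decision.
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)

query = "is the first one cheaper than the second one"
context = ("Yogurt Rigels is a yogurt. Yogurt Rigels has attribute price "
           "with value 2.10. Yogurt Anisakis is a yogurt. Yogurt Anisakis "
           "has attribute price with value 3.55.")

# Encode (q, c) as a sentence pair.
inputs = tokenizer(query, context, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(["False", "True"][logits.argmax(-1).item()])  # untrained: arbitrary
```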
Clueless Training: In this case, we only add IsA and HasAttribute relations and do not include any clues in the context c in the training data (i.e., Case I). For each generated context, the data generation process attaches all possible forms of queries with their true/false labels and adds them to the training samples. For evaluation, we generate the test samples in the same fashion. Table 5 summarizes the model performance for predicting the right label for each query given a context with k ∈ {2, 3, 5} items. We can see that by increasing the context size (i.e., the number of items returned from the back-end), the model performance decreases.

Train/Test   I/I   II/II   III/III
2 items      90%   97%     97%
3 items      88%   95%     95%
5 items      77%   91%     93%

Table 5: Roberta-Base model performance for T/F reasoning over Price and Rating.

To understand how well a model trained with larger k and with comparative or superlative clues can generalize to a smaller number of items in the context, Table 6 shows the performance of a model trained with a context size of 5 items using Case II or Case III samples and tested on samples generated by Case I with k ∈ {2, 3, 5} items. We observe that the model does not generalize to different context sizes if we fix the number of items in the context during training.

Train → / Test ↓   Case II (5 items)   Case III (5 items)
Case I (2 items)   75%                 76%
Case I (3 items)   70%                 71%
Case I (5 items)   67%                 69%

Table 6: Train on Case II or Case III with 5 items in all the contexts and test on Case I with 2, 3, or 5 items.

Clue-Aware Training: To resolve the issues observed in clueless training, we add comparative and superlative clues randomly to each context during training, so that the model can learn the inverse and transitivity rules, and we also add a random number of items (up to kmax) to each individual context. Note that we do not add clues to the context during evaluation/inference. Table 7 shows the accuracy of models trained on samples generated by Case IV and tested on Case I (clueless), Case II (only comparative clues), and Case III (both comparative and superlative clues) samples. From the results, we observe that adding clues during training helps the model achieve better performance.

Train/Test      IV/I     IV/II    IV/III
up to 5 items   98.70%   99.70%   99.70%

Table 7: Training with Case IV: Roberta model performance for T/F reasoning over numerical attributes.
5.2 Beyond True/False Queries

For this set of experiments, we pick the T5 transformer model, which enables us to perform text-to-text prediction. Similar to Khashabi et al. (2020), we remove the task prefix used in the original T5 models, since we use this model only for a single reasoning task within our defined framework. To take advantage of transfer learning from other publicly available question-answering datasets, we start our fine-tuning from the pretrained UnifiedQA-T5 small model. We generate 100K samples for the training dataset, 5K for dev, and 20K examples for each test set. In our test sets, we make sure that for each element in Table 8 we have at least 5K examples. Samples are generated as described in Section 4.1. The training hyper-parameters for fine-tuning this model are provided in Appendix B.
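A sketch of the fine-tuning input/output handling, starting from the public UnifiedQA checkpoint. The checkpoint name and the "\n"-separated query/context layout are our assumptions based on the UnifiedQA release conventions; the paper only states that it starts from UnifiedQA-T5 small without a task prefix.

```python
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

# Assumed public checkpoint name from the allenai UnifiedQA release.
model_name = "allenai/unifiedqa-t5-small"
tokenizer = T5TokenizerFast.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.0)

# No task prefix: the model is used for a single reasoning task.
source = ("which one is the cheapest? \\n Yogurt Rigels is a yogurt. "
          "Yogurt Rigels has attribute price with value 2.10. "
          "Yogurt Anisakis is a yogurt. Yogurt Anisakis has attribute "
          "price with value 3.55.")
target = "inform Yogurt Rigels"  # Table 2 output format

# One fine-tuning step: standard seq2seq cross-entropy on the target.
batch = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```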
In Table 8, we summarize the performance of the fine-tuned model for different scenarios, reporting the results separately for pairs (q, c) such that q can have one (e.g., "Give me something organic") or two attributes (e.g., "Something cheaper than $100 but not vegan") about user preferences. We use the exact-match (EM) accuracy metric to evaluate model performance. We can observe that the model achieves an EM accuracy of over 90% across all the scenarios. Furthermore, we see that when increasing the number of items in the reasoning context, predicting the correct Inform/Select or Extract output form becomes harder with more attributes in the user query. Evaluating the model performance on all examples (about 8K samples) from our test set that include the spoken form of numerical values in q (e.g., "Give me something cheaper than five dollars"), we observe 95% EM accuracy, showing the ability of the model to compare written-form and spoken-form versions of numbers. We should note that the accuracy of the model in predicting the cases with no reasoning (e.g., "Checkout please") is important because it makes the integration with the overall dialogue system simpler: the model can delegate to the domain-specific dialogue policy. In our experiments, we observe an accuracy of 100% on these cases; however, this value can vary with the size of the out-of-domain space/vocabulary.

# of Attr.s   km   Inform/Select   Extract
1             0    –               99.5±0.02%
1             1    98.6±0.05%      99.2±0.03%
1             2    97.3±0.05%      98.5±0.05%
1             3    97.0±0.05%      98.0±0.03%
1             4    96.0±0.10%      98.0±0.05%
1             5    95.5±0.09%      96.0±0.06%
2             0    –               98.6±0.03%
2             1    98.5±0.05%      97.8±0.02%
2             2    95.0±0.08%      96.7±0.01%
2             3    94.5±0.05%      96.3±0.03%
2             4    91.5±0.09%      95.0±0.03%
2             5    90.0±0.11%      93.5±0.06%

Table 8: EM accuracy for test sets with different numbers of attributes, context sizes, and reasoning tasks.
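Exact match here means the generated action string must equal the gold output; a minimal version of the metric (our formulation, including the normalization choice):

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions identical to the gold output after
    whitespace/case normalization (e.g. 'inform Yogurt Rigels')."""
    normalize = lambda s: " ".join(s.lower().split())
    hits = sum(normalize(p) == normalize(r)
               for p, r in zip(predictions, references))
    return hits / len(references)

print(exact_match_accuracy(["inform Yogurt Rigels"],
                           ["inform yogurt rigels"]))  # -> 1.0
```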
6 Conclusions

In this paper, we proposed an architecture for the integration of a reasoning model in task-oriented dialogue systems. We formulated the problem as a sequence prediction problem given a user query and context, and presented an approach for generating data and fine-tuning generative models to reason over a set of facts in the dialogue context. We demonstrated our approach for a shopping assistant and reported experimental results for different formulations of the problem. We showed that these models can learn to do logical reasoning to 1) answer questions from the dialogue context when all the information is available, 2) extract constraints when partial information is available, and 3) delegate to the dialogue policy when no reasoning is required. For future work, we plan to investigate the application of our method to other reasoning tasks (e.g., temporal and spatial reasoning). Furthermore, we plan to explore how logical reasoning can be used to disambiguate with the user when multiple conclusions can be made.
References

Forough Arabshahi, Jennifer Lee, Mikayla Gawarecki, Kathryn Mazaitis, Amos Azaria, and Tom Mitchell. 2020. Conversational neuro-symbolic commonsense reasoning. arXiv preprint arXiv:2006.10022.

Tarek R Besold, Artur d'Avila Garcez, Sebastian Bader, Howard Bowman, Pedro Domingos, Pascal Hitzler, Kai-Uwe Kühnberger, Luis C Lamb, Daniel Lowd, Priscila Machado Vieira Lima, et al. 2017. Neural-symbolic learning and reasoning: A survey and interpretation. arXiv preprint arXiv:1711.03902.

Harry Bunt, Jan Alexandersson, Jean Carletta, Jae-Woong Choe, Alex Chengyu Fang, Koiti Hasida, Kiyong Lee, Volha Petukhova, Andrei Popescu-Belis, Laurent Romary, Claudia Soria, and David Traum. 2010. Towards an ISO standard for dialogue act annotation. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA).

Xiuying Chen, Zhi Cui, Jiayi Zhang, Chen Wei, Jianwei Cui, Bin Wang, Dongyan Zhao, and Rui Yan. 2020. Reasoning in dialog: Improving response generation by context reading comprehension. CoRR, abs/2012.07410.

Peter Clark, Oyvind Tafjord, and Kyle Richardson. 2020. Transformers as soft reasoners over language. arXiv preprint arXiv:2002.05867.

Leyang Cui, Yu Wu, Shujie Liu, Yue Zhang, and Ming Zhou. 2020. Mutual: A dataset for multi-turn dialogue reasoning. CoRR, abs/2004.04494.

David Ding, Felix Hill, Adam Santoro, and Matt Botvinick. 2020. Object-based attention for spatio-temporal reasoning: Outperforming neuro-symbolic models with flexible distributed architectures. arXiv preprint arXiv:2012.08508.

Honghua Dong, Jiayuan Mao, Tian Lin, Chong Wang, Lihong Li, and Denny Zhou. 2019. Neural logic machines. arXiv preprint arXiv:1904.11694.

Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).

Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tür. 2019. Topical-Chat: Towards knowledge-grounded open-domain conversations. In Proc. Interspeech 2019, pages 1891–1895.

Xiaoxiao Guo, Tim Klinger, Clemens Rosenbaum, Joseph P Bigus, Murray Campbell, Ban Kawas, Kartik Talamadupula, Gerry Tesauro, and Satinder Singh. 2017. Learning to query, reason, and answer questions on ambiguous texts.

Rujun Han, Xiang Ren, and Nanyun Peng. 2020. Deer: A data efficient language model for event temporal reasoning. arXiv preprint arXiv:2012.15283.

Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. 2016. Harnessing deep neural networks with logic rules. arXiv preprint arXiv:1603.06318.

Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2391–2401, Hong Kong, China. Association for Computational Linguistics.

Aishwarya Kamath and Rajarshi Das. 2018. A survey on semantic parsing. arXiv preprint arXiv:1812.00978.

Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UnifiedQA: Crossing format boundaries with a single QA system. CoRR, abs/2005.00700.

Kevin Lin, Oyvind Tafjord, Peter Clark, and Matt Gardner. 2019. Reasoning over paragraph effects in situations. arXiv preprint arXiv:1908.05852.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.

Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.

Arvind Neelakantan, Semih Yavuz, Sharan Narang, Vishaal Prasad, Ben Goodrich, Daniel Duckworth, Chinnadhurai Sankar, and Xifeng Yan. 2019. Neural assistant: Joint action prediction, response generation, and latent knowledge reasoning. CoRR, abs/1910.14613.

Duc Nghia Pham, John Thornton, and Abdul Sattar. 2008. Modelling and solving temporal reasoning as propositional satisfiability. Artificial Intelligence, 172(15):1752–1782.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. CoRR, abs/1910.10683.

Kyle Richardson, Hai Hu, Lawrence Moss, and Ashish Sabharwal. 2020. Probing natural language inference models through semantic fragments. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8713–8721.

Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, and William L Hamilton. 2019. CLUTRR: A diagnostic benchmark for inductive reasoning from text. arXiv preprint arXiv:1908.06177.

Oyvind Tafjord, Matt Gardner, Kevin Lin, and Peter Clark. 2019. QuaRTz: An open-domain dataset of qualitative relationship questions. arXiv preprint arXiv:1909.03553.

Alon Talmor, Oyvind Tafjord, Peter Clark, Yoav Goldberg, and Jonathan Berant. 2020. Leap-of-thought: Teaching pre-trained models to systematically reason over implicit knowledge. arXiv preprint arXiv:2006.06609.

Zhong Victor, Xiong Caiming, and Socher Richard. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103.

Po-Wei Wang, Priya Donti, Bryan Wilder, and Zico Kolter. 2019. SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver. In International Conference on Machine Learning, pages 6545–6554. PMLR.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. 2015. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.

Yaqi Xie, Ziwei Xu, Mohan S Kankanhalli, Kuldeep S Meel, and Harold Soh. 2019. Embedding symbolic knowledge into deep networks. arXiv preprint arXiv:1909.01161.

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600.
A Example Interaction with the Shopping Assistant

The following is an example interaction with the shopping assistant, with our reasoning model integrated with the dialogue policy.

User:  I want to buy apples
Agent: The top 3 relevant products are: Organic Honeycrisp Apple, Organic Gala Apple, and Organic Pink Lady Apple. Which one would you like to add to the cart?
User:  Which one is the cheapest?
Agent: The second one, Organic Gala Apple, with a price of $2.49.
User:  Which one is the most expensive?
Agent: The first one, Organic Honeycrisp Apple, with a price of $3.99.
User:  Is the first one cheaper than the second one?
Agent: No. The first one, Organic Honeycrisp Apple, costs $3.99 but the second one, Organic Gala Apple, costs $2.49.
User:  Is the second one cheaper?
Agent: Yes, that is correct. The second one, Organic Gala Apple, costs $2.49.
User:  Give me something cheaper
Agent: The top relevant product is Fuji Apple. Would you like to add it to the cart?
User:  Does it cost less than two dollars?
Agent: Yes, that is correct. Fuji Apple costs $1.89.
User:  Add it to my cart
Agent: I have added Fuji Apple to your cart. What else would you like to buy?
User:  Buy grapes
Agent: The top 2 relevant products are: Red Seedless Grapes and Conventional Cut Grapes. Which one would you like to add to the cart?
User:  Is the second one more popular?
Agent: No. The second one, Conventional Cut Grapes, has a rating of 4.3 but the first one, Red Seedless Grapes, has a rating of 4.5.
User:  Add the highest rated one to my cart
Agent: I have added Red Seedless Grapes to your cart. What else would you like to buy?
User:  Find cheese
Agent: The top 2 relevant products are: Zola and Muller. Which one would you like to add to the cart?
User:  Anything vegan and cheaper than five dollars?
Agent: I couldn't find anything vegan with price less than five dollars. But I found Vegan Artisan cheese with a price of $7.99. Would you like to add it to the cart?
User:  Sure
Agent: I have added Vegan Artisan cheese to your cart. What else would you like to buy?
User:  What's in my cart?
Agent: You have 3 items in the cart with a total amount of $12.87. Do you want to hear a detailed description?
User:  Checkout
Agent: I have placed your order.

B Training Hyper-parameters

In this section, we provide the parameters used to fine-tune the transformer models in this work. The following table summarizes the key parameters used during the fine-tuning of the Roberta-base and UnifiedQA-T5-small pretrained models. For the optimizer, we use AdamW (Loshchilov and Hutter, 2017).

Model                Batch Size   Epochs   Optimizer   Learning Rate   Weight Decay
Roberta-Base         192          15       AdamW       10^-5           0.1
UnifiedQA-T5-small   1024         20       AdamW       10^-5           0.0
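In code, the optimizer rows above correspond to something like the following sketch, using PyTorch's AdamW; the paper cites Loshchilov and Hutter (2017) but does not name a framework, so this mapping is our assumption.

```python
import torch

def make_optimizer(model, lr=1e-5, weight_decay=0.1):
    """AdamW with decoupled weight decay (Loshchilov and Hutter, 2017);
    weight_decay=0.1 matches the Roberta-Base row, 0.0 the T5 row."""
    return torch.optim.AdamW(model.parameters(), lr=lr,
                             weight_decay=weight_decay)
```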
C Discussion

C.1 Generalization to non-synthetic datasets

One of the directions we are currently working on is to create realistic (human-based) conversations with logical reasoning use cases arising during interactions with dialogue systems. This type of dataset can help us evaluate the proposed idea with a higher degree of confidence, since no matter how much time one spends on generating synthetic datasets, there will always be some uncontrolled structure introduced by the design of the data simulation mechanism that can corrupt a fair evaluation of deep neural network models and their learning process. However, we believe the True/False scenarios in our current study are less prone to this type of issue and are quite helpful in understanding the reasoning capabilities of our proposed algorithm, such as negation, numerical comparison, or inclusion/exclusion of categorical values, since the model needs to learn the reasoning procedure: the only way for the model to arrive at the right prediction is to apply the underlying reasoning procedure to formulate the output True/False result. For future exploration, we will consider a) better algorithms for generating training data, and b) more realistic, general-purpose, possibly human-in-the-loop training data to make the data generation more general and less domain-specific.

C.2 Error Analysis

During our evaluation, we observed that the performance of Transformer models (such as Roberta and T5) degrades when the length of the reasoning context increases, i.e., when there are more items in the context for reasoning. Also, based on the results in Table 8, we see that increasing the number of items in the reasoning context leads to performance degradation. Another issue with Transformer models, or language models in general, arises during output generation beyond the True/False scenario, when the size of the output sequence increases, e.g., when several items all satisfy the user query: the prediction misses some of the items in the response once the length of the output sequence (number of predicted tokens/words) exceeds some threshold. This issue is related both to the long-sequence generation behavior of language models and to reasoning ability when multiple items match the user query's criteria, which mostly occurs when the number of items in the context is larger.

C.3 Generalization to unseen attributes with common values

One of the aspects we would like to understand is the scalability/generalization of the trained reasoning model to attributes unseen at test time. There are two possibilities for a new attribute: (1) it shares neither values nor the keywords a user may use to describe it with the attributes used during training, e.g., a color attribute for the experiment in Section 5²; (2) it shares the same values, but the keywords a user may use to describe it do not overlap with any of those used during training, e.g., calorie³. It would be very challenging to teach the model in a few-shot manner to learn about attributes from bucket (1). However, based on our initial experiments, we have seen that the model can easily generalize to attributes from bucket (2) by fine-tuning on a small number of examples in a few-shot manner. For example, we took the model trained only on the diet, flavor, price, and rating attributes and fine-tuned it using only 100 new reasoning context examples that also included the calorie attribute. Table 9 summarizes the model performance before and after fine-tuning. The test set used for this analysis only has user queries about calories and includes 3K examples on the calorie attribute.

Model                EM accuracy
Before fine-tuning   33%
After fine-tuning    80%

Table 9: Model EM accuracy before/after fine-tuning on the new attribute calorie.

² For a query about color, a user may use keywords such as [darker, lighter, warmer, red, blue, ..., etc.], and the attribute values are red, blue, dark blue, ..., etc., which overlap with none of the attributes already in our training dataset, i.e., diet, flavor, price, and rating.

³ For a query about calories, a user may use keywords such as [healthier, higher calories, more energetic, ..., etc.], and the attribute values are numeric values possibly shared with price and rating (considering we have done unit normalization for attributes).
