Graph Optimal Transport for Cross-Domain Alignment

Liqun Chen¹, Zhe Gan², Yu Cheng², Linjie Li², Lawrence Carin¹, Jingjing Liu²
We propose Graph Optimal Transport (GOT), a principled framework for cross-domain alignment that leverages recent advances in Optimal Transport (OT). OT-based learning aims to optimize for distribution matching via minimizing the cost of transporting one distribution to another. We extend this to CDA (here a domain can be language, images, videos, etc.). The transport plan is thus redefined as transporting the distribution of embeddings from one domain (e.g., language) to another (e.g., images). By minimizing the cost of the learned transport plan, we explicitly minimize the embedding distance between the domains, i.e., optimizing towards better cross-domain alignment.

Specifically, we convert entities (e.g., objects, words) in each domain (e.g., image, sentence) into a graph, where each entity is represented by a feature vector, and the graph representations are recurrently updated via graph propagation. Cross-domain alignment can then be formulated as a graph matching problem, and addressed by calculating matching scores based on graph distance. In our GOT framework, we utilize two types of OT distance: (i) Wasserstein distance (WD) (Peyré et al., 2019), which is applied to node (entity) matching; and (ii) Gromov-Wasserstein distance (GWD) (Peyré et al., 2016), which is adopted for edge (structure) matching. WD only measures the distance between node embeddings across domains, without considering the topological information encoded in the graphs. GWD, on the other hand, compares graph structures by measuring the distance between a pair of nodes within each graph. When fused together, the two distances allow the proposed GOT framework to effectively take into account both node and edge information for better graph matching.

The main contributions of this work are summarized as follows. (i) We propose Graph Optimal Transport (GOT), a new framework that tackles cross-domain alignment by adopting Optimal Transport for graph matching. (ii) GOT is compatible with existing neural network models, acting as an effective drop-in regularizer on the original objective. (iii) To demonstrate the versatile generalization ability of the proposed approach, we conduct experiments on five diverse tasks: image-text retrieval, visual question answering, image captioning, machine translation, and text summarization. Results show that GOT provides consistent performance enhancement over strong baselines across all the tasks.

2. Graph Optimal Transport Framework

We first introduce the problem formulation of Cross-domain Alignment in Sec. 2.1, then present the proposed Graph Optimal Transport (GOT) framework in Secs. 2.2-2.4.

2.1. Problem Formulation

Assume we have two sets of entities from two different domains, where each entity is represented by a feature vector, i.e., X̃ = {x̃_i}_{i=1}^n and Ỹ = {ỹ_j}_{j=1}^m, where n and m are the number of entities in each domain, respectively. The scope of this paper mainly focuses on tasks involving images and text, thus entities here correspond to objects in an image or words in a sentence. An image can be represented as a set of detected objects, each associated with a feature vector (e.g., from a pre-trained Faster RCNN (Anderson et al., 2018)). With a word embedding layer, a sentence can be represented as a sequence of word feature vectors.

A deep neural network f_θ(·) can be designed to take both X̃ and Ỹ as initial inputs, and generate contextualized representations:

    X, Y = f_θ(X̃, Ỹ) ,    (1)

where X = {x_i}_{i=1}^n, Y = {y_j}_{j=1}^m, and advanced attention mechanisms (Bahdanau et al., 2015; Vaswani et al., 2017) can be applied to f_θ(·) to simulate soft alignment. The final supervision signal l is then used to learn θ, i.e., the training objective is defined as:

    L(θ) = L_sup(X, Y, l) .    (2)

Several instantiations for different tasks are summarized as follows: (i) Image-text Retrieval. X̃ and Ỹ are image and text features, respectively, and l is the binary label indicating whether the input image and sentence are paired or not. Here f_θ(·) can be the SCAN model (Lee et al., 2018), and L_sup(·) corresponds to a ranking loss (Faghri et al., 2018; Chechik et al., 2010). (ii) VQA. Here l denotes the ground-truth answer, f_θ(·) can be the BUTD or BAN model (Anderson et al., 2018; Kim et al., 2018), and L_sup(·) is the cross-entropy loss. (iii) Machine Translation. X̃ and Ỹ are textual features from the source and target sentences, respectively. Here f_θ(·) can be an encoder-decoder Transformer model (Vaswani et al., 2017), L_sup(·) corresponds to a cross-entropy loss that models the conditional distribution p(Y|X), and l is not needed. To simplify subsequent discussions, all the tasks are abstracted into f_θ(·) and L_sup(·).

In most previous work, the learned attention can be interpreted as a soft alignment between X̃ and Ỹ. However, only the final supervision signal L_sup(·) is used for model training, thus lacking an objective that explicitly encourages cross-domain alignment. To enforce alignment and cast a regularizing effect on model training, we propose a new objective for Cross-domain Alignment:

    L(θ) = L_sup(X, Y, l) + α · L_CDA(X, Y) ,    (3)

where α is a hyper-parameter, and L_CDA(X, Y) is the cross-domain alignment loss, which we instantiate with the proposed GOT distance (Sec. 2.4) and use as a regularizer for updating θ.
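To make the abstraction concrete, below is a minimal sketch of the regularized objective in Eq. (3). The names `task_model`, `supervised_loss`, and `cda_loss` are hypothetical stand-ins for f_θ(·), L_sup(·), and L_CDA(·); they are not part of any released codebase.

```python
def got_training_step(task_model, supervised_loss, cda_loss,
                      x_tilde, y_tilde, label, alpha=0.1):
    # Eq. (1): contextualized representations X, Y = f_theta(X_tilde, Y_tilde)
    X, Y = task_model(x_tilde, y_tilde)
    # Eq. (2): the original task-specific loss, e.g., ranking or cross-entropy
    loss_sup = supervised_loss(X, Y, label)
    # Eq. (3): add the cross-domain alignment term as a drop-in regularizer
    return loss_sup + alpha * cda_loss(X, Y)
```

Any existing model that already produces X and Y can in principle be trained this way without changing its architecture.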
Definition 2.1. Let µ and ν denote two discrete distributions supported on the node embeddings {x_i}_{i=1}^n and {y_j}_{j=1}^m, with weight vectors u and v, respectively. In the discrete matching setting, the Wasserstein distance (WD) between µ and ν is defined as:

    D_w(µ, ν) = min_{T∈Π(u,v)} Σ_{i=1}^{n} Σ_{j=1}^{m} T_{ij} · c(x_i, y_j) ,    (4)
Figure 2. Schematic computation graph of the Graph Optimal Transport (GOT) distance used for cross-domain alignment. WD is short for Wasserstein Distance, and GWD is short for Gromov-Wasserstein Distance. See Sec. 2.1 and 2.4 for details.
where Π(u, v) = {T ∈ R^{n×m}_+ | T 1_m = u, T^⊤ 1_n = v}, 1_n denotes an n-dimensional all-one vector, and c(x_i, y_j) is the cost function evaluating the distance between x_i and y_j. For example, the cosine distance c(x_i, y_j) = 1 − (x_i^⊤ y_j) / (‖x_i‖_2 ‖y_j‖_2) is a popular choice. The matrix T is denoted as the transport plan, where T_{ij} represents the amount of mass shifted from u_i to v_j.

D_w(µ, ν) defines an optimal transport distance that measures the discrepancy between each pair of samples across the two domains. In our graph matching, this is a natural choice for node (entity) matching. Moreover, WD can be computed efficiently with the Sinkhorn algorithm (Cuturi, 2013), hence is readily applicable to large deep neural networks.

Gromov-Wasserstein Distance  Instead of directly calculating distances between two sets of nodes as in WD, the Gromov-Wasserstein distance (GWD) (Peyré et al., 2016; Chowdhury & Mémoli, 2019) can be used to calculate distances between pairs of nodes within each domain, as well as to measure how these distances compare to those in the counterpart domain. GWD in the discrete matching setting can be formulated as follows.

Definition 2.2. Following the same notation as in Definition 2.1, the Gromov-Wasserstein distance between µ and ν is defined as:

    D_gw(µ, ν) = inf_{γ∈Π(µ,ν)} E_{(x,y)∼γ, (x',y')∼γ} [L(x, y, x', y')]
               = min_{T̂∈Π(u,v)} Σ_{i,i',j,j'} T̂_{ij} T̂_{i'j'} L(x_i, y_j, x'_i, y'_j) ,    (5)

where L(·) is the cost function evaluating the intra-graph structural similarity between two pairs of nodes (x_i, x'_i) and (y_j, y'_j), i.e., L(x_i, y_j, x'_i, y'_j) = ‖c_1(x_i, x'_i) − c_2(y_j, y'_j)‖, where c_i, i ∈ {1, 2}, are functions that evaluate node similarity within the same graph (e.g., the cosine similarity).

Similar to WD, in the GWD setting, c_1(x_i, x'_i) and c_2(y_j, y'_j) (corresponding to the edges) can be viewed as two nodes in the dual graphs (Van Lint et al., 2001), where edges are projected into nodes. The learned matrix T̂ now becomes a transport plan that helps align the edges in different graphs. Note that the same c_1 and c_2 are also used for graph construction in Sec. 2.2.
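As a concrete illustration of Definition 2.2, the sketch below builds intra-graph cosine-similarity matrices (playing the role of c_1 and c_2) and evaluates the GWD objective of Eq. (5) for a given plan T̂. It materializes the full four-dimensional structural cost purely for readability; a scalable implementation would avoid this, and the function names are ours, not from the paper's codebase.

```python
import torch
import torch.nn.functional as F

def intra_graph_similarity(Z):
    # Cosine similarity between all node pairs within one graph (c_1 or c_2).
    Z = F.normalize(Z, dim=-1)
    return Z @ Z.t()

def gwd_objective(Cx, Cy, T_hat):
    # Structural cost L(x_i, y_j, x'_i, y'_j) = |c_1(x_i, x'_i) - c_2(y_j, y'_j)|.
    L = (Cx[:, None, :, None] - Cy[None, :, None, :]).abs()  # shape (n, m, n, m)
    # Contract the structural cost with the transport plan twice, as in Eq. (5).
    return torch.einsum('ij,ijkl,kl->', T_hat, L, T_hat)
```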
2.4. Graph Matching via OT Distances

Though GWD is capable of capturing edge similarity between graphs, it cannot be directly applied to graph alignment, since only the similarity between c_1(x_i, x'_i) and c_2(y_j, y'_j) is considered, without taking into account node representations. For example, the word pair ("boy", "girl") has similar cosine similarity as the pair ("football", "basketball"), but the semantic meanings of the two pairs are completely different, and they should not be matched.

On the other hand, WD can match nodes in different graphs, but fails to capture the similarity between edges. If there are duplicated entities represented by different nodes in the same graph, WD will treat them as identical and ignore their neighboring relations. For example, given the sentence "there is a red book on the blue desk" paired with an image containing several desks and books in different colors, it is difficult to correctly identify which book in the image the sentence is referring to, without understanding the relations among the objects in the image.

To best couple WD and GWD and unify these two distances in a mutually-beneficial way, we propose a transport plan T shared by both WD and GWD. Compared with naively employing two different transport plans, we observe that this joint plan works better (see Table 8), and faster, since we only need to solve T once (instead of twice). Intuitively, with a shared transport plan, WD and GWD can enhance each other effectively, as T utilizes both node and edge information simultaneously. Formally, the proposed GOT distance is defined as:

    D_got(µ, ν) = min_{T∈Π(u,v)} Σ_{i,i',j,j'} T_{ij} [ λ c(x_i, y_j) + (1 − λ) T_{i'j'} L(x_i, y_j, x'_i, y'_j) ] .    (6)

We apply the Sinkhorn algorithm (Cuturi, 2013; Cuturi & Peyré, 2017) to solve WD (4) with an entropic regularizer (Benamou et al., 2015):

    min_{T∈Π(u,v)} Σ_{i=1}^{n} Σ_{j=1}^{m} T_{ij} c(x_i, y_j) + β H(T) ,    (7)

where H(T) = Σ_{i,j} T_{ij} log T_{ij}, and β is the hyper-parameter controlling the importance of the entropy term. Details are provided in Algorithm 1. The solver for GWD can be readily developed based on Algorithm 1, where p and q are defined as uniform distributions (as shown in Algorithm 2), following Alvarez-Melis & Jaakkola (2018). With the help of the Sinkhorn algorithm, GOT can be efficiently implemented in popular deep learning libraries, such as PyTorch and TensorFlow.
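Since Algorithms 1 and 2 are not reproduced here, the following is a minimal sketch of an entropic-regularized solver for Eq. (7) in the style of Cuturi (2013). The iteration count, β, and the small eps stabilizer are illustrative defaults; a practical implementation would typically run the updates in log space for numerical stability.

```python
import torch

def sinkhorn(C, u, v, beta=0.1, n_iters=50, eps=1e-9):
    # Solve min_{T in Pi(u, v)} <T, C> + beta * sum_ij T_ij log T_ij  (Eq. 7).
    K = torch.exp(-C / beta)               # Gibbs kernel of the cost matrix
    b = torch.ones_like(v)
    for _ in range(n_iters):
        a = u / (K @ b + eps)              # scale rows to match marginal u
        b = v / (K.t() @ a + eps)          # scale columns to match marginal v
    T = a[:, None] * K * b[None, :]        # transport plan diag(a) K diag(b)
    return T, (T * C).sum()                # plan and the resulting OT cost
```

For node matching, C would be the cross-domain cosine-distance matrix C_ij = c(x_i, y_j), with u and v taken to be uniform.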
To obtain a unified solver for the GOT distance, we define the unified cost function as:

    L_unified = λ c(x, y) + (1 − λ) L(x, y, x', y') ,    (8)

where λ is the hyper-parameter controlling the relative importance of the two cost functions. Instead of using projected gradient descent or conjugate gradient descent as in Xu et al. (2019a;b) and Vayer et al. (2018), we can approximate the transport plan T by adding L_unified back into Algorithm 2, so that Line 9 in Algorithm 2 solves T for WD and GWD at the same time, effectively matching both nodes and edges simultaneously. The solver for calculating the GOT distance is illustrated in Figure 2, and the detailed algorithm is summarized in Algorithm 3. The calculated GOT distance is used as the cross-domain alignment loss L_CDA(X, Y) in (3), as a regularizer to update the parameters θ.

Algorithm 3  Computing GOT Distance.
 1: Input: {x_i}_{i=1}^n, {y_j}_{j=1}^m, hyper-parameter λ
 2: Compute intra-domain similarities:
 3:   [C_x]_{ij} = cos(x_i, x_j), [C_y]_{ij} = cos(y_i, y_j)
 4: x'_i = g_1(x_i), y'_j = g_2(y_j)  // g_1, g_2 denote two MLPs
 5: Compute cross-domain similarities:
 6:   C_{ij} = cos(x'_i, y'_j)
 7: if T is shared then
 8:   Update L in Algorithm 2 (Line 8) with:
 9:     L_unified = λ C + (1 − λ) L
10:   Plug L_unified back into Algorithm 2 and solve the new T
11:   Compute D_got
12: else
13:   Apply Algorithm 1 to obtain D_w
14:   Apply Algorithm 2 to obtain D_gw
15:   D_got = λ D_w + (1 − λ) D_gw
16: end if
17: Return D_got
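To give a rough end-to-end picture, here is one plausible PyTorch realization of the shared-plan branch of Algorithm 3, reusing the sinkhorn, intra_graph_similarity, and gwd_objective sketches above. The alternating update (linearize the GW term at the current plan, add the node cost as in Eq. (8), re-run Sinkhorn) is our simplification of Algorithms 1-2, the g_1, g_2 projection MLPs are omitted, and the dense four-dimensional edge cost is kept only for readability.

```python
import torch
import torch.nn.functional as F

def got_distance(X, Y, lam=0.5, beta=0.1, outer_iters=5):
    # X: (n, d), Y: (m, d) node embeddings of the two graphs.
    n, m = X.size(0), Y.size(0)
    u = torch.full((n,), 1.0 / n)
    v = torch.full((m,), 1.0 / m)
    Cx = intra_graph_similarity(X)            # intra-domain (edge) similarities
    Cy = intra_graph_similarity(Y)
    C = 1.0 - F.normalize(X, dim=-1) @ F.normalize(Y, dim=-1).t()   # node cost
    L = (Cx[:, None, :, None] - Cy[None, :, None, :]).abs()         # edge cost
    T = u[:, None] * v[None, :]               # initialize the shared plan
    for _ in range(outer_iters):
        gw = torch.einsum('ijkl,kl->ij', L, T)        # GW cost induced by T
        unified = lam * C + (1.0 - lam) * gw          # Eq. (8)
        T, _ = sinkhorn(unified, u, v, beta)          # re-solve the shared plan
    d_w = (T * C).sum()                               # node-matching term
    d_gw = gwd_objective(Cx, Cy, T)                   # edge-matching term
    return lam * d_w + (1.0 - lam) * d_gw             # fused GOT distance
```

The returned scalar can then serve directly as L_CDA(X, Y) in Eq. (3).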
3. Related Work

Optimal Transport  Wasserstein distance (WD), a.k.a. Earth Mover's distance, has been widely applied to machine learning tasks. In computer vision, Rubner et al. (1998) use WD to discover the structure of color distributions for image search. In natural language processing, WD has been applied to document retrieval (Kusner et al., 2015) and sequence-to-sequence learning (Chen et al., 2019a). There are also studies adopting WD in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014; Salimans et al., 2018; Chen et al., 2018; Mroueh et al., 2018; Zhang et al., 2020) to alleviate the mode-collapse issue. Recently, it has also been used for vision-and-language pre-training to encourage word-region alignment (Chen et al., 2019b). Besides WD, the Gromov-Wasserstein distance (Peyré et al., 2016) has been proposed for distributional metric matching and applied to unsupervised machine translation (Alvarez-Melis & Jaakkola, 2018).

There are different ways to solve the OT distance, such as linear programming. However, this solver is not differentiable, so it cannot be applied in deep learning frameworks. Recently, WGAN (Arjovsky et al., 2017) proposed to approximate the dual form of WD by imposing a 1-Lipschitz constraint on the discriminator; note that the duality used for WGAN is restricted to the W-1 distance, i.e., ‖·‖. The Sinkhorn algorithm was first proposed in Cuturi (2013) as a solver for calculating an entropic-regularized OT distance. Thanks to the Envelope Theorem (Cuturi & Peyré, 2017), the Sinkhorn algorithm can be efficiently calculated and readily applied to neural networks. More recently, Vayer et al. (2018) proposed the fused GWD for graph matching. Our proposed GOT framework enjoys the benefits of both the Sinkhorn algorithm and fused GWD: it is (i) capable of capturing more structured information by marrying WD and GWD, and (ii) scalable to large datasets and trainable with deep neural networks.

Graph Neural Network  Neural networks operating on graph data were first introduced in Gori et al. (2005) using recurrent neural networks. Later, Duvenaud et al. (2015) proposed a convolutional neural network over graphs for classification tasks. However, these methods suffer from scalability issues, because they need to learn node-degree-specific weight matrices for large graphs. To alleviate this issue, Kipf & Welling (2016) proposed to use a single weight matrix per layer in the neural network, which is capable of handling large-scale graphs.
Table 1. Results on image-text retrieval evaluated on Recall@K (R@K). Upper panel: Flickr30K; lower panel: COCO.
Figure 3. (a) A comparison of the inferred transport plan from GOT (top chart) and the learned attention matrix from SCAN (bottom chart). Both serve as a lens to visualize cross-domain alignment. The horizontal axis represents image regions, and the vertical axis represents word tokens. (b) The original image.
When fusing the two distances together, GOT achieves the best performance.

Figure 3 provides a visualization of the learned transport plan in GOT and the learned attention matrix in SCAN. Both serve as a proxy to lend insights into the learned alignment. As shown, the attention matrix from SCAN is much denser and noisier than the transport plan inferred by GOT. This shows that our model can better discover cross-domain relations between image-text pairs, since the inferred transport plan is more interpretable and has less ambiguity. For example, both the words "sidewalk" and "skateboard" match the corresponding image regions very well.

Because of the Envelope Theorem (Cuturi & Peyré, 2017), GOT needs to be calculated only during the forward phase of model training. Therefore, it does not introduce much extra computation time. For example, when using the same machine for the image-text retrieval experiments, SCAN required 6hr 34min for training and SCAN+GOT 6hr 57min.

Visual Question Answering  We also consider the VQA 2.0 dataset (Goyal et al., 2017), which contains human-annotated QA pairs on COCO images (Lin et al., 2014). For each image, an average of 3 questions are collected, with 10 candidate answers per question. The most frequent answer from the annotators is selected as the correct answer. Following previous work (Kim et al., 2018), we take the answers that appear more than 9 times in the training set as candidate answers, which results in 3129 candidates. Classification accuracy is used as the evaluation metric, defined as min(1, (# humans that provided the answer) / 3).
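In code, this soft accuracy is simply the following (the function name is a hypothetical illustration, not from the official VQA toolkit):

```python
def vqa_soft_accuracy(num_humans_with_answer: int) -> float:
    # min(1, #humans that provided this answer / 3), as used on VQA 2.0.
    return min(1.0, num_humans_with_answer / 3.0)
```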
The BAN model (Kim et al., 2018) is used as the baseline, with the original codebase used for fair comparison. Results are summarized in Table 2. Both WD and GWD improve the BAN model on the validation set, and GOT achieves a further performance lift.

Model    BAN     BAN+GWD   BAN+WD   BAN+GOT
Score    66.00   66.21     66.26    66.44

Table 2. Results (accuracy) on the VQA 2.0 validation set, using BAN (Kim et al., 2018) as the baseline.

We also investigate whether different architecture designs affect the performance gain. We consider BUTD (Anderson et al., 2018) as an additional baseline, and apply different numbers of glimpses m to the BAN model, denoted as BAN-m. Results are summarized in Table 3, with the following observations: (i) When the number of parameters in the tested model is small, such as BUTD, the improvement brought by GOT is more significant. (ii) BAN-4, a simpler model than BAN-8, when combined with GOT, can outperform BAN-8 without using GOT (66.10 vs. 66.00). (iii) For complex models such as BAN-8 that might have limited space for improvement, GOT is still able to achieve a performance gain.

Model      BUTD    BAN-1   BAN-2   BAN-4   BAN-8
w/o GOT    63.37   65.37   65.61   65.81   66.00
w/ GOT     65.01   65.68   65.88   66.10   66.44

Table 3. Results (accuracy) of applying GOT to BUTD (Anderson et al., 2018) and BAN-m (Kim et al., 2018) on VQA 2.0. m denotes the number of glimpses.

4.2. Text Generation Tasks

Image Captioning  We conduct experiments on image captioning using the same COCO dataset. The same bottom-up-attention features (Anderson et al., 2018) used in the image-text retrieval experiments are adopted here.
Reference: India’s new prime minister, Narendra Modi, is meeting his Japanese counterpart, Shinzo Abe, in Tokyo to discuss
economic and security ties, on his first major foreign visit since winning May’s election.
MLE: India ‘ s new prime minister , Narendra Modi , meets his Japanese counterpart , Shinzo Abe , in Tokyo , during his
first major foreign visit in May to discuss economic and security relations .
GOT: India ’ s new prime minister , Narendra Modi , is meeting his Japanese counterpart Shinzo Abe in Tokyo in his first
major foreign visit since his election victory in May to discuss economic and security relations.
Reference: Chinese leaders presented the Sunday ruling as a democratic breakthrough because it gives Hong Kongers a direct
vote, but the decision also makes clear that Chinese leaders would retain a firm hold on the process through a
nominating committee tightly controlled by Beijing.
MLE: The Chinese leadership presented the decision of Sunday as a democratic breakthrough , because it gives Hong
Kong citizens a direct right to vote , but the decision also makes it clear that the Chinese leadership maintains the
expiration of a nomination committee closely controlled by Beijing .
GOT: The Chinese leadership presented the decision on Sunday as a democratic breakthrough , because Hong Kong
citizens have a direct electoral right , but the decision also makes it clear that the Chinese leadership remains
firmly in hand with a nominating committee controlled by Beijing.
Table 7. Comparison of German-to-English translation examples. For each example, we show the human translation (reference) and the translations from MLE and GOT. We highlight the key-phrase differences between reference and translation outputs in blue and red, and denote errors in the translations in bold. In the first example, GOT correctly maintains all the information in "since winning May's election" by translating it to "since his election victory in May", whereas MLE only generates "in May". In the second example, GOT successfully keeps the information "Beijing", whereas MLE generates the wrong words "expiration of".
For example, the key words "more" and "investment" are preserved in the output summary very well.

4.3. Ablation Study

We conduct additional ablation studies on the EN-VI and EN-DE datasets for machine translation.

Model           EN-VI uncased   EN-DE uncased
GOT (shared)    29.92 ± 0.11    26.05 ± 0.18
GOT (unshared)  29.77 ± 0.12    25.89 ± 0.17

Table 8. Ablation study on the transport plan in machine translation. Both models were run 5 times with the same hyper-parameter setting.

λ       0       0.1     0.3     0.5     0.8     1.0
BLEU    28.65   29.31   29.52   29.65   29.92   29.49

Table 9. Ablation study of the hyper-parameter λ on the EN-VI machine translation dataset.

As shown in Table 9, the weight on WD needs to be larger than the weight on GWD, since intuitively node matching is more important than edge matching for machine translation. However, both WD and GWD contribute to GOT achieving the best performance.

5. Conclusions

We propose Graph Optimal Transport, a principled framework for cross-domain alignment. With the Wasserstein and Gromov-Wasserstein distances, both intra-domain and cross-domain relations are captured for better alignment. Empirically, we observe that enforcing alignment can serve as an effective regularizer for model training. Extensive experiments show that the proposed method is a generic framework that can be applied to a wide range of cross-domain tasks. For future work, we plan to apply the proposed framework to self-supervised representation learning.
References

Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., and Zhang, L. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR, 2018.

Antol, S. et al. VQA: Visual question answering. In ICCV, 2015.

Arjovsky, M. et al. Wasserstein generative adversarial networks. In ICML, 2017.

Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.

Benamou, J.-D., Carlier, G., Cuturi, M., Nenna, L., and Peyré, G. Iterative Bregman projections for regularized transportation problems. SIAM Journal on Scientific Computing, 2015.

Cettolo, M., Niehues, J., Stüker, S., Bentivogli, L., Cattoni, R., and Federico, M. The IWSLT 2015 evaluation campaign. In International Workshop on Spoken Language Translation, 2015.

Chechik, G., Sharma, V., Shalit, U., and Bengio, S. Large scale online learning of image similarity through ranking. Journal of Machine Learning Research, 2010.

Chen, L., Dai, S., Tao, C., Zhang, H., Gan, Z., Shen, D., Zhang, Y., Wang, G., Zhang, R., and Carin, L. Adversarial text generation via feature-mover's distance. In NeurIPS, 2018.

Chen, L., Zhang, Y., Zhang, R., Tao, C., Gan, Z., Zhang, H., Li, B., Shen, D., Chen, C., and Carin, L. Improving sequence-to-sequence learning via optimal transport. arXiv preprint arXiv:1901.06283, 2019a.

Chen, Y.-C., Li, L., Yu, L., Kholy, A. E., Ahmed, F., Gan, Z., Cheng, Y., and Liu, J. UNITER: Learning universal image-text representations. arXiv preprint arXiv:1909.11740, 2019b.

Chowdhury, S. and Mémoli, F. The Gromov-Wasserstein distance between networks and stable network invariants. Information and Inference: A Journal of the IMA, 2019.

Cuturi, M. Sinkhorn distances: Lightspeed computation of optimal transport. In NeurIPS, 2013.

Cuturi, M. and Peyré, G. Computational optimal transport. 2017.

De Goes, F. et al. An optimal transport approach to robust reconstruction and simplification of 2d shapes. In Computer Graphics Forum, 2011.

Duvenaud, D. K., Maclaurin, D., Iparraguirre, J., Bombarell, R., Hirzel, T., Aspuru-Guzik, A., and Adams, R. P. Convolutional networks on graphs for learning molecular fingerprints. In NeurIPS, 2015.

Faghri, F., Fleet, D. J., Kiros, J. R., and Fidler, S. VSE++: Improved visual-semantic embeddings. In BMVC, 2018.

Gan, Z., Gan, C., He, X., Pu, Y., Tran, K., Gao, J., Carin, L., and Deng, L. Semantic compositional networks for visual captioning. In CVPR, 2017.

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In NeurIPS, 2014.

Gori, M., Monfardini, G., and Scarselli, F. A new model for learning in graph domains. In IEEE International Joint Conference on Neural Networks, 2005.

Goyal, Y., Khot, T., Summers-Stay, D., Batra, D., and Parikh, D. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In CVPR, 2017.

Graff, D., Kong, J., Chen, K., and Maeda, K. English Gigaword. Linguistic Data Consortium, Philadelphia, 2003.

Gu, J., Cai, J., Joty, S. R., Niu, L., and Wang, G. Look, imagine and match: Improving textual-visual cross-modal retrieval with generative models. In CVPR, 2018.

Hu, Z., Shi, H., Yang, Z., Tan, B., Zhao, T., He, J., Wang, W., Yu, X., Qin, L., Wang, D., et al. Texar: A modularized, versatile, and extensible toolkit for text generation. arXiv preprint arXiv:1809.00794, 2018.

Huang, Y., Wang, W., and Wang, L. Instance-aware image and sentence matching with selective multimodal LSTM. In CVPR, 2017.

Huang, Y., Wu, Q., Song, C., and Wang, L. Learning semantic concepts and order for image and sentence matching. In CVPR, 2018.

Karpathy, A. and Fei-Fei, L. Deep visual-semantic alignments for generating image descriptions. In CVPR, 2015.

Kim, J.-H., Jun, J., and Zhang, B.-T. Bilinear attention networks. In NeurIPS, 2018.

Kipf, T. N. and Welling, M. Semi-supervised classification with graph convolutional networks. arXiv:1609.02907, 2016.

Kusner, M., Sun, Y., Kolkin, N., and Weinberger, K. From word embeddings to document distances. In ICML, 2015.
Lee, K.-H. et al. Stacked cross attention for image-text matching. In ECCV, 2018.

Li, L., Gan, Z., Cheng, Y., and Liu, J. Relation-aware graph attention network for visual question answering. In ICCV, 2019a.

Li, Y., Gu, C., Dullien, T., Vinyals, O., and Kohli, P. Graph matching networks for learning the similarity of graph structured objects. In ICML, 2019b.

Lin, C.-Y. ROUGE: A package for automatic evaluation of summaries. Text Summarization Branches Out, 2004.

Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. Microsoft COCO: Common objects in context. In ECCV, 2014.

Lu, J., Xiong, C., Parikh, D., and Socher, R. Knowing when to look: Adaptive attention via a visual sentinel for image captioning. In CVPR, 2017.

Luise, G., Rudi, A., Pontil, M., and Ciliberto, C. Differential properties of Sinkhorn approximation for learning with Wasserstein distance. arXiv:1805.11897, 2018.

Malinowski, M. and Fritz, M. A multi-world approach to question answering about real-world scenes based on uncertain input. In NeurIPS, 2014.

Maretic, H. P., El Gheche, M., Chierchia, G., and Frossard, P. GOT: An optimal transport framework for graph comparison. In NeurIPS, 2019.

Mroueh, Y., Li, C.-L., Sercu, T., Raj, A., and Cheng, Y. Sobolev GAN. In ICLR, 2018.

Nam, H., Ha, J.-W., and Kim, J. Dual attention networks for multimodal reasoning and matching. In CVPR, 2017.

Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. BLEU: A method for automatic evaluation of machine translation. In ACL, 2002.

Peyré, G., Cuturi, M., and Solomon, J. Gromov-Wasserstein averaging of kernel and distance matrices. In ICML, 2016.

Peyré, G., Cuturi, M., et al. Computational optimal transport. Foundations and Trends in Machine Learning, 2019.

Plummer, B. A. et al. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In ICCV, 2015.

Ren, S., He, K., Girshick, R., and Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. In NeurIPS, 2015.

Rubner, Y., Tomasi, C., and Guibas, L. J. A metric for distributions with applications to image databases. In ICCV, 1998.

Rush, A. M., Chopra, S., and Weston, J. A neural attention model for abstractive sentence summarization. In EMNLP, 2015.

Salimans, T., Zhang, H., Radford, A., and Metaxas, D. Improving GANs using optimal transport. In ICLR, 2018.

Schuster, M. and Paliwal, K. K. Bidirectional recurrent neural networks. Transactions on Signal Processing, 1997.

Van Lint, J. H., Wilson, R. M., and Wilson, R. M. A course in combinatorics. Cambridge University Press, 2001.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. In NeurIPS, 2017.

Vayer, T., Chapel, L., Flamary, R., Tavenard, R., and Courty, N. Optimal transport for structured data with application on graphs. arXiv:1805.09114, 2018.

Veličković, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., and Bengio, Y. Graph attention networks. In ICLR, 2018.

Vinyals, O., Toshev, A., Bengio, S., and Erhan, D. Show and tell: A neural image caption generator. In CVPR, 2015.

Xie, Y., Wang, X., Wang, R., and Zha, H. A fast proximal point method for Wasserstein distance. arXiv:1802.04307, 2018.

Xu, H., Luo, D., and Carin, L. Scalable Gromov-Wasserstein learning for graph partitioning and matching. In NeurIPS, 2019a.

Xu, H., Luo, D., Zha, H., and Carin, L. Gromov-Wasserstein learning for graph matching and node embedding. In ICML, 2019b.

Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A. C., Salakhutdinov, R., Zemel, R. S., and Bengio, Y. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.

Yang, Z., He, X., Gao, J., Deng, L., and Smola, A. Stacked attention networks for image question answering. In CVPR, 2016a.

Yang, Z., Yang, D., Dyer, C., He, X., Smola, A., and Hovy, E. Hierarchical attention networks for document classification. In NAACL, 2016b.

Yao, T., Pan, Y., Li, Y., and Mei, T. Exploring visual relationship for image captioning. In ECCV, 2018.
You, Q., Jin, H., Wang, Z., Fang, C., and Luo, J. Image
captioning with semantic attention. In CVPR, 2016.
Yu, Z., Yu, J., Cui, Y., Tao, D., and Tian, Q. Deep modular
co-attention networks for visual question answering. In
CVPR, 2019.
Zhang, R., Chen, C., Gan, Z., Wen, Z., Wang, W., and
Carin, L. Nested-wasserstein self-imitation learning for
sequence generation. arXiv:2001.06944, 2020.
Zheng, Z., Zheng, L., Garrett, M., Yang, Y., Xu, M., and
Shen, Y.-D. Dual-path convolutional image-text embed-
dings with instance loss. TOMM, 2020.