
ViT5: Pretrained Text-to-Text Transformer for Vietnamese Language Generation

Long Phan1,2, Hieu Tran1, Hieu Nguyen1,2, Trieu H. Trinh1,3
1 VietAI Research, 2 Case Western Reserve University, 3 New York University
[email protected]

Abstract

We present ViT5, a pretrained Transformer-based encoder-decoder model for the Vietnamese language. With T5-style self-supervised pretraining, ViT5 is trained on a large corpus of high-quality and diverse Vietnamese texts. We benchmark ViT5 on two downstream text generation tasks, Abstractive Text Summarization and Named Entity Recognition. Although Abstractive Text Summarization has been widely studied for the English language thanks to its rich and large source of data, there has been minimal research into the same task in Vietnamese, a much lower-resource language. In this work, we perform exhaustive experiments on both Vietnamese Abstractive Summarization and Named Entity Recognition, validating the performance of ViT5 against many other pretrained Transformer-based encoder-decoder models. Our experiments show that ViT5 significantly outperforms existing models and achieves state-of-the-art results on Vietnamese Text Summarization. On the task of Named Entity Recognition, ViT5 is competitive with the previous best results from pretrained encoder-based Transformer models. Further analysis shows the importance of context length during self-supervised pretraining for downstream performance across different settings.

1 Introduction

In recent years, Transformer-based architectures and pretrained language models (LMs) have played a crucial role in the development of Natural Language Processing (NLP). Large pretrained models such as ELMo (Peters et al., 2018), GPT (Brown et al., 2020), and BERT (Devlin et al., 2018) are trained on large corpora and can derive contextual representations of the language(s) in the training data. Once pretraining is complete, these models achieve state-of-the-art results on a broad range of downstream tasks (Devlin et al., 2018). These self-supervised learning methods rely on objectives such as Masked Language Modeling (MLM) (Devlin et al., 2018), in which random tokens in the input sequence are masked and the model attempts to predict the original tokens. The success of pretrained models in English has inspired new research efforts to develop pretrained models in other languages such as Vietnamese (e.g., PhoBERT (Nguyen and Nguyen, 2020) and ViBERT (Bui et al., 2020)) and Italian (Sarti and Nissim, 2022). There are also ongoing efforts to develop multilingual pretrained models (mT5 (Xue et al., 2020), mBART (Liu et al., 2020)) that improve performance across multiple languages by learning both general and language-specific representations.

Recently, BARTpho (Tran et al., 2021), a large pretrained sequence-to-sequence model for Vietnamese in the BART style (Lewis et al., 2019), demonstrated the effectiveness of pretrained language models for Vietnamese abstractive summarization. Nevertheless, past work has shown that the T5 architecture (Raffel et al., 2019) can outperform BART in some settings (e.g., Phan et al. (2021a)). Inspired by this, we propose ViT5, trained on the Vietnamese monolingual subset of CC100 and following the architecture and training methodology of Raffel et al. (2019). We perform exhaustive comparisons of downstream performance against many different pretrained Transformer-based models (Nguyen et al., 2021; Tran et al., 2021; To et al., 2021). Specifically, we finetune ViT5 on two summarization datasets, Wikilingua (Ladhak et al., 2020) and Vietnews (Nguyen et al., 2019), and one Named Entity Recognition dataset, PhoNER (Truong et al., 2021).

Text summarization is an important downstream task whose input is a free-form text paragraph or document(s) and whose output is expected to be a short summary of that input. ViT5 achieves state-of-the-art results on both single-document summarization tasks. We also analyze the max-length hyperparameter for input and output sequences during self-supervised learning and show that longer lengths, matching the length of the downstream documents, lead to better results.

For NER, we reformulate the per-token classification task as a generation task in which the decoder reconstructs the original input sentence with Named Entity tags inserted around each entity token (Phan et al., 2021b). This simple and straightforward formulation achieves competitive results compared to direct per-token classification with encoder-only models (Nguyen and Nguyen, 2020).

2 Related Work

There are many abstractive summarization studies in English. In an early example, Gehrmann et al. (2018) employed a bottom-up content selector (BottomUp) to determine which phrases in the source document should be part of the summary; a copy mechanism was then applied only to the pre-selected phrases during decoding. Their experiments obtained significant ROUGE improvements on several canonical summarization datasets.

In recent years, pretrained language models have been used to enhance performance on language generation tasks. Liu and Lapata (2019) developed a Transformer-based encoder-decoder model so that pretrained language models like BERT can be adopted for abstractive summarization. The authors proposed a novel document-level BERT-based encoder (BERTSum) and a general framework encompassing both extractive and abstractive summarization. Based on BERTSum, Dou et al. (2021) introduced GSum, which effectively uses different types of guidance signals as input in order to generate more suitable words and more accurate summaries. This model accomplished state-of-the-art performance on four popular English summarization datasets.

Meanwhile, there are only a small number of studies on Vietnamese text summarization, and most of them focus on extractive summarization. Nguyen et al. (2018) compared a wide range of extractive methods, including unsupervised ranking methods (e.g., LexRank, LSA, KL-divergence), supervised learning methods using TF-IDF and classifiers (e.g., Support Vector Machines, AdaBoost, learning-to-rank), and deep learning methods (e.g., Convolutional Neural Networks, Long Short-Term Memory). Similarly, Nguyen et al. (2019) evaluated extractive methods on their own dataset, which was released publicly as a benchmark for future studies.

Recent work (Quoc et al., 2021) investigated the combination of a pretrained BERT model and an unsupervised K-means clustering algorithm for extractive text summarization. The authors used multilingual and monolingual BERT models to encode sentence-level contextual information and then ranked this information with the K-means algorithm. Their report showed that monolingual models achieved better results than multilingual models on the same extractive summarization tasks. However, due to the lack of studies on Vietnamese abstractive summarization, we compare both multilingual and monolingual encoder-decoder models.

3 ViT5

In this section, we describe our newly released ViT5 models, the vocabulary generation steps, the pretraining data, and the training setup.

Figure 1: Loss curves for the masked span prediction task used to pretrain the ViT5 models. The larger model with the longer context optimizes much better, which leads to better downstream performance.

Figure 2: An overview of the ViT5 encoder-decoder architecture, with input-output examples ("<task_name>: <input_text>") for the two downstream tasks. For Named Entity Recognition, the decoder reconstructs the sentence with inserted entity tags.

3.1 Model

ViT5 follows the encoder-decoder architecture proposed by Vaswani et al. (2017) and the T5 framework proposed by Raffel et al. (2019). The original T5 work proposed five configurations of model size: small, base, large, 3B, and 11B. For the purpose of practical study, we adapt the base (310M parameters) and large (866M parameters) configurations for ViT5 and leave bigger models for future work.

We train ViT5 models with two different input and output lengths, 256 and 1024, and experiment thoroughly with both to gain insight into the importance of pretraining data length for summarization tasks. For the self-supervised training objective, we use the span-corruption objective with a corruption rate of 15%. Figure 1 shows the loss during the self-supervised training stage for the three models.
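As a concrete illustration of this objective, the sketch below corrupts roughly 15% of the tokens of a sequence in contiguous spans and builds the corresponding decoder target. The helper name, the sentinel format, and the 3-token span length are illustrative assumptions rather than the exact ViT5 implementation; only the 15% corruption rate comes from the text above.

```python
import random

def corrupt_spans(tokens, corruption_rate=0.15, span_len=3, seed=0):
    """Illustrative T5-style span corruption (not the exact ViT5 code).

    Roughly `corruption_rate` of the tokens are grouped into contiguous spans;
    each span is replaced by one sentinel token in the encoder input, and the
    decoder target lists each sentinel followed by the tokens it replaced.
    """
    rng = random.Random(seed)
    n_to_mask = max(1, round(len(tokens) * corruption_rate))
    masked = set()
    while len(masked) < n_to_mask:
        start = rng.randrange(len(tokens))
        masked.update(range(start, min(len(tokens), start + span_len)))

    inputs, targets, sentinel, i = [], [], 0, 0
    while i < len(tokens):
        if i in masked:
            inputs.append(f"<extra_id_{sentinel}>")
            targets.append(f"<extra_id_{sentinel}>")
            while i < len(tokens) and i in masked:
                targets.append(tokens[i])
                i += 1
            sentinel += 1
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

# Example: the encoder input keeps most tokens, the decoder predicts the masked spans.
print(corrupt_spans("ViT5 is a pretrained text-to-text model for Vietnamese".split()))
```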
3.2 Vocabulary

Unlike some other current Vietnamese Transformer-based language models, we find that an effective vocabulary contributes a significant improvement to model performance. We therefore carefully pre-processed a 5GB subset of our pretraining corpus, normalizing punctuation and capitalization and splitting numbers. We fixed the vocabulary size to 36K subwords and trained a SentencePiece (Kudo and Richardson, 2018) model on that subset.
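For reference, a vocabulary of this kind can be trained with the SentencePiece library roughly as follows; the file paths, the unigram model type, and the character-coverage value are assumptions, while the 36K vocabulary size comes from the text above.

```python
import sentencepiece as spm

# Train a 36K-subword model on the pre-processed 5GB subset (path is a placeholder).
spm.SentencePieceTrainer.train(
    input="vi_pretrain_subset_5gb.txt",  # one normalized paragraph per line
    model_prefix="vit5_spm",
    vocab_size=36000,
    model_type="unigram",                # T5-style tokenizers commonly use a unigram LM
    character_coverage=0.9995,           # keep rare Vietnamese characters
)

sp = spm.SentencePieceProcessor(model_file="vit5_spm.model")
print(sp.encode("Chúng tôi giới thiệu ViT5.", out_type=str))
```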
3.3 Pretraining Data

We use the CC100 dataset (Monolingual Datasets from Web Crawl Data) (Wenzek et al., 2020; Conneau et al., 2020), which contains monolingual data for over 100 languages and was constructed with the pipeline of Wenzek et al. (2020) applied to the January-December 2018 Common Crawl snapshots. The total size of the Vietnamese corpus is 138GB of raw text. From it, we process and filter 69GB of short paragraphs for the 256-length model and 71GB of long paragraphs for the 1024-length model.
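The paper does not spell out the filtering rule; one plausible sketch, assuming paragraphs are routed by their tokenized length using the 256 and 1024 thresholds above, is:

```python
def split_by_length(paragraphs, tokenize, short_max=256, long_max=1024):
    """Route each paragraph to the 256-length or 1024-length pretraining pool.

    `tokenize` is any callable returning subword tokens (e.g. the SentencePiece
    model above); the routing rule itself is an illustrative assumption.
    """
    short_pool, long_pool = [], []
    for p in paragraphs:
        n = len(tokenize(p))
        if n <= short_max:
            short_pool.append(p)
        elif n <= long_max:
            long_pool.append(p)
        # paragraphs longer than long_max would be chunked or dropped
    return short_pool, long_pool
```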
Table 1: Sizes and average input/output lengths (in words) of the finetuning datasets after filtering.

                        Wikilingua   Vietnews
Train                        13707      99134
Test                          3916      22498
Avg. body length               521        519
Avg. abstract length            44         38

4 Abstractive Summarization

4.1 Wikilingua

Wikilingua (Ladhak et al., 2020) is a large-scale multilingual corpus for abstractive summarization covering 18 languages, including Vietnamese. The article and summary pairs are extracted from WikiHow[1] and have been reviewed by human authors to ensure quality. The Vietnamese articles are translated from the original English articles and reviewed by WikiHow's international translation team.

4.2 Vietnews

Vietnews (Nguyen et al., 2019) is a single-document abstractive summarization dataset of news articles from reputable Vietnamese news websites (tuoitre.vn, vnexpress.net, and nguoiduatin.vn). The authors removed all articles related to questionnaires, analytical comments, and weather forecasts to ensure the quality of the summarization data, so the final released dataset includes only long news-event documents. The data consists of 150,704 word-level news articles, each a pair of summary abstract and body text. We follow the filtering pipeline of Tran et al. (2021) to deduplicate the train/dev/test sets. The statistics after filtering are shown in Table 1.

[1] https://www.wikihow.com

Table 2: Test results on Wikilingua and Vietnews summarization

                          WikiLingua                        Vietnews
Models                    ROUGE-1  ROUGE-2  ROUGE-L         ROUGE-1  ROUGE-2  ROUGE-L
Transformer (RND2RND)       46.25    16.57    29.82           57.56    24.25    35.53
PhoBERT2PhoBERT             50.40    19.88    32.49           60.37    29.12    39.44
mBERT2mBERT                 52.82    20.57    31.55           59.67    27.36    36.73
mBART                       55.21    25.69    37.33           59.81    28.28    38.71
mT5                         55.27    27.63    38.30           58.05    26.76    37.38
BARTpho                     57.16    31.18    40.89           61.14    30.31    40.15
ViT5base 256-length         57.86    29.98    40.23           61.85    31.70    41.70
ViT5base 1024-length        58.61    31.46    41.45           62.77    33.16    42.75
ViT5large 1024-length       60.22    33.12    43.08           63.37    34.24    43.55

Notes: The best scores are in bold and the second-best scores are underlined. The scores in gray are from our experiments.
Code and models for reproducing our experiments: https://github.com/vietai/ViT5

4.3 Baselines

To verify the effectiveness of our proposed methods, we compare the ViT5 models with Transformer models based on Vaswani et al. (2017), the VieSum BERT2BERT models (Nguyen et al., 2021), multilingual encoder-decoder models (Xue et al., 2020; Liu et al., 2020), and the Vietnamese encoder-decoder BARTpho model (Tran et al., 2021). The baseline Transformer models (labeled RND) have multi-head self-attention and a feed-forward network and are initialized with random weights. For the BARTpho models, we follow the model setup and results released by Tran et al. (2021). All ViT5 models are finetuned with a sequence length of 1024.
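As an illustration of this setup with the Hugging Face transformers library (the checkpoint identifier, beam size, and generation length are assumptions; the 1024-token input length and the task-prefix format of Figure 2 come from the paper, and the authors' actual code is linked under Table 2):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Checkpoint name is an assumption; see https://github.com/vietai/ViT5 for the released models.
checkpoint = "VietAI/vit5-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Inputs follow the "<task_name>: <input_text>" format of Figure 2 and are
# truncated to the 1024-token sequence length used for finetuning.
document = "wikilingua: " + "Anh ấy bắt xe tới tham gia bữa tiệc tại một nhà hàng sang trọng ..."
batch = tokenizer(document, max_length=1024, truncation=True, return_tensors="pt")
summary_ids = model.generate(**batch, max_length=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```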
4.4 Results

We report the results of the ViT5 models on two datasets, Wikilingua and Vietnews. We experiment with two pretraining versions of ViT5, 256-length and 1024-length, to gain insight into the importance of the pretraining data's paragraph length for summarization in Vietnamese. We also compare the ViT5base and ViT5large models.

We use ROUGE (Recall-Oriented Understudy for Gisting Evaluation) as the benchmark metric for both single-document summarization datasets. The metric measures the overlap of n-grams and word sequences between a candidate and a reference sequence; ROUGE-1, ROUGE-2, and ROUGE-L measure the overlap of unigrams, bigrams, and the longest matching sequence, respectively.
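Scores of this kind can be computed with any standard ROUGE implementation; the snippet below uses the rouge_score package, which is our choice of tooling rather than something the paper specifies.

```python
from rouge_score import rouge_scorer

# ROUGE-1/2/L F1 between one reference summary and one system summary.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=False)
scores = scorer.score(
    target="he was hospitalized after attending the party",
    prediction="he was taken to hospital after the party",
)
for name, s in scores.items():
    print(name, round(s.fmeasure, 4))
```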
4.4.1 Wikilingua

The results of our models on the Wikilingua summarization dataset are shown in Table 2. The ViT5 models outperform all of the other pretrained models in our experiments, achieving state-of-the-art results on all ROUGE metrics. There is also a significant increase in ROUGE scores when the models are pretrained with longer input and output sequences (1024 compared to 256).

Both versions of ViT5 1024-length achieve the highest results on the Wikilingua summarization task across all ROUGE metrics, with ViT5large 1024-length setting the state of the art. There is a significant improvement between the base and large 1024-length architectures (approximately 2% for ROUGE-1, ROUGE-2, and ROUGE-L). This is expected, as ViT5large (866M parameters) is approximately 2.8 times larger than ViT5base (310M).

The comparison between the 256-length and 1024-length versions of ViT5base is also interesting. Although both are finetuned with a 1024-length setting, ViT5base 1024-length performs slightly better, with about 1% higher ROUGE-1, ROUGE-2, and ROUGE-L. We attribute this to the longer sequences seen during self-supervised training: as reported in Table 1, the average input body in the Wikilingua corpus is longer than 256 tokens and can be considered a long document. For this reason, pretraining ViT5 on a 1024-length corpus achieves better results on the Wikilingua summarization task.

Two out of three ViT5 models perform better than the published BARTpho model when summarizing the Wikilingua corpus. This may be a result of the quality of the pretraining data: while BARTpho (and PhoBERT) was trained on 20GB of news data, the ViT5 models are trained on CC100, a subset of Common Crawl. The CC100 corpus contains a more diverse and general representation of the Vietnamese language than news data, and Wikilingua is closer to academic or instructional text than to news-like text.

4.4.2 Vietnews

The Vietnews corpus is much larger than the Wikilingua corpus (with 7.7% for the train set and 5.8% for the test set). The results for Vietnews abstractive summarization are also shown in Table 2. Following the discussion in Section 1 of the need for an effective large pretrained encoder-decoder model, we can see that there is only a minimal increase in performance for the existing Vietnamese encoder-only models compared to the Transformer baseline. Despite being pretrained on a large corpus of Vietnamese news, BARTpho still shows its limitations on the Vietnews summarization task, with only slightly better ROUGE scores than the multilingual models (mBART and mT5).

Our ViT5 models still achieve state-of-the-art results on the Vietnews task for both the 256- and 1024-length versions. Although trained on a more general Vietnamese natural language domain (CC100), the ViT5 models achieve notable results on this more specific news-domain corpus. This supports the assumption that our ViT5 models learn a better representation of the Vietnamese language even for more domain-specific summarization problems.

Similar to the results discussed in Section 4.4.1, ViT5base pretrained on the longer-sequence corpus (1024-length) summarizes better than the 256-length version across all ROUGE metrics. The average input length for Vietnews documents is approximately the same as in the Wikilingua task (more than 500 words). Therefore, the quality of long sequences during self-supervised training also leads to better summarization on the downstream Vietnews finetuning task.

5 Named Entity Recognition (NER)

To verify the effectiveness of ViT5 on classification tasks, we test our models on the PhoNER COVID19 dataset (Truong et al., 2021). PhoNER is a dataset for recognizing named entities related to the COVID-19 domain in Vietnamese. It consists of 35,000 entities in over 10,000 sentences, and the goal is to recognize 10 entity types related to COVID-19 and epidemic topics. The dataset was released and benchmarked with PhoBERT (Nguyen and Nguyen, 2020).

We treat the NER classification task as a text-to-text generation task, with label tags inserted before and after each entity token (Phan et al., 2021b). An example of NER in text-to-text format is shown in Figure 2. The results are shown in Table 3.
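A minimal sketch of this reformulation for building the decoder targets is given below; the helper name is ours, and the tag format (each entity span wrapped by its label followed by `*`) follows the PhoNER example in Figure 2.

```python
def tag_sentence(tokens, bio_labels):
    """Wrap each entity span with LABEL* ... LABEL*, as in Figure 2."""
    out, open_label = [], None
    for tok, lab in zip(tokens, bio_labels):
        label = None if lab == "O" else lab.split("-", 1)[-1]
        if open_label and label != open_label:
            out.append(f"{open_label}*")          # close the previous entity
            open_label = None
        if label and (lab.startswith("B-") or open_label is None):
            if open_label:                        # adjacent entity of the same type
                out.append(f"{open_label}*")
            out.append(f"{label}*")               # open a new entity
            open_label = label
        out.append(tok)
    if open_label:
        out.append(f"{open_label}*")
    return " ".join(out)

# Example with PhoNER-style labels:
print(tag_sentence(
    ["Bệnh", "nhân", "75", "là", "nữ"],
    ["O", "O", "B-PATIENT_ID", "O", "B-GENDER"],
))
# -> "Bệnh nhân PATIENT_ID* 75 PATIENT_ID* là GENDER* nữ GENDER*"
```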


Table 3: Test results on PhoNER COVID19

Models                   Micro-F1
XLM-Rlarge                  93.8
PhoBERTbase                 94.2
PhoBERTlarge                94.5
ViT5base 256-length         93.19
ViT5base 1024-length        94.5
ViT5large 1024-length       93.8

Notes: The best scores are in bold.

The ViT5large 1024-length model, although effective at generating Vietnamese abstractive summaries, shows its limitations on classification tasks, with a lower F1 score on NER. On the other hand, our ViT5base 1024-length model still performs slightly better than PhoBERTbase and is competitive with the current state-of-the-art PhoBERTlarge on the PhoNER corpus.

6 Discussion

According to the results on both the Wikilingua and Vietnews summarization tasks (Table 2), there is a steady increase in ROUGE scores going from the baseline Transformer, to the BERT2BERT-style models (PhoBERT2PhoBERT and mBERT2mBERT), to the multilingual encoder-decoder models (mBART, mT5), to the pretrained monolingual models (BARTpho and ViT5). For Vietnamese summarization tasks, monolingual encoder-decoder models noticeably outperform multilingual models, most likely thanks to their more focused and narrower pretraining stage.

Interestingly, a more general domain of pretraining text can lead to better domain-specific summarization performance. As shown in Section 4.4.1, our ViT5 models, while trained on a more general corpus (CC100), outperform current models that are trained on news-related corpora. More technical domains such as law, medicine, or engineering are not tested here, as we leave these domain-specific summarization tasks for future studies.

The slightly better performance of ViT5base 1024-length compared to ViT5base 256-length suggests that summarizing longer documents (more than 512 tokens) requires a comparatively longer context length during the pretraining stage.

7 Conclusion

We introduce ViT5, a pretrained sequence-to-sequence Transformer model for the Vietnamese language. Leveraging the T5 self-supervised pretraining formulation on massive, high-quality Vietnamese corpora, we show that finetuned ViT5 models perform well on both generation and classification tasks. We exhaustively compare ViT5 with other pretrained models built on both multilingual and monolingual corpora. Our experiments show that ViT5 achieves state-of-the-art results on summarization for both the Wikilingua and Vietnews corpora, and competitive results on Named Entity Recognition on the PhoNER COVID19 dataset. We also analyze and discuss the importance of context length during the self-supervised pretraining stage, which strongly and positively influences downstream performance.

8 Acknowledgements

We would like to thank the Google TPU Research Cloud (TRC) program and VietAI for providing us with free access to TPU v3-8 to train and finetune large ViT5 models.

References

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. CoRR, abs/2005.14165.

The Viet Bui, Oanh Thi Tran, and Phuong Le-Hong. 2020. Improving sequence tagging for Vietnamese text using transformer-based neural models. CoRR, abs/2006.15994.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440–8451, Online. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.

Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, and Graham Neubig. 2021. GSum: A general framework for guided neural abstractive summarization. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).

Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109, Brussels, Belgium. Association for Computational Linguistics.

Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In EMNLP.

Faisal Ladhak, Esin Durmus, Claire Cardie, and Kathleen R. McKeown. 2020. WikiLingua: A new benchmark dataset for cross-lingual abstractive summarization. CoRR, abs/2010.03093.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. CoRR, abs/1910.13461.

Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In EMNLP/IJCNLP.

Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. CoRR, abs/2001.08210.

Dat Quoc Nguyen and Anh Tuan Nguyen. 2020. PhoBERT: Pre-trained language models for Vietnamese. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1037–1042.

Hieu Nguyen, Long Phan, James Anibal, Alec Peltekian, and Hieu Tran. 2021. VieSum: How robust are transformer-based models on Vietnamese summarization?

Minh-Tien Nguyen, Hoang-Diep Nguyen, Thi-Hai-Nang Nguyen, and Van-Hau Nguyen. 2018. Towards state-of-the-art baselines for Vietnamese multi-document summarization. In 2018 10th International Conference on Knowledge and Systems Engineering (KSE), pages 85–90.

Van-Hau Nguyen, Thanh-Chinh Nguyen, Minh-Tien Nguyen, and Nguyen Hoai. 2019. VNDS: A Vietnamese dataset for summarization. Pages 375–380.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. CoRR, abs/1802.05365.

Long Phan, Hieu Tran, Daniel Le, Hieu Nguyen, James Annibal, Alec Peltekian, and Yanfang Ye. 2021a. CoTexT: Multi-task learning with code-text transformer. In Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021), pages 40–47, Online. Association for Computational Linguistics.

Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, and Grégoire Altan-Bonnet. 2021b. SciFive: A text-to-text transformer model for biomedical literature. CoRR, abs/2106.03598.

Huy To Quoc, Kiet Van Nguyen, Ngan Luu-Thuy Nguyen, and Anh Gia-Tuan Nguyen. 2021. Monolingual versus multilingual BERTology for Vietnamese extractive multi-document summarization.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. CoRR, abs/1910.10683.

Gabriele Sarti and Malvina Nissim. 2022. IT5: Large-scale text-to-text pretraining for Italian language understanding and generation.

Huy Quoc To, Kiet Van Nguyen, Ngan Luu-Thuy Nguyen, and Anh Gia-Tuan Nguyen. 2021. Monolingual versus multilingual BERTology for Vietnamese extractive multi-document summarization.

Nguyen Luong Tran, Duong Minh Le, and Dat Quoc Nguyen. 2021. BARTpho: Pre-trained sequence-to-sequence models for Vietnamese.

Thinh Hung Truong, Mai Hoang Dao, and Dat Quoc Nguyen. 2021. COVID-19 named entity recognition for Vietnamese. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR, abs/1706.03762.

Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4003–4012, Marseille, France. European Language Resources Association.

Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. CoRR, abs/2010.11934.

