TLDR: token loss dynamic reweighting for reducing repetitive utterance generation

S Jiang, T Wolf, C Monz, M de Rijke - arXiv preprint arXiv:2003.11963, 2020 - arxiv.org
… is due to hard tokens, and empirically validate this by using FL (focal loss). We propose a more effective
token loss dynamic reweighting (TLDR) method for reducing repetitive generations that is …

TLDR: Token Loss Dynamic Reweighting for Reducing Repetitive Utterance Generation

S Jiang, T Wolf, C Monz, M de Rijke - 2020 - dare.uva.nl
… finished, hard tokens are still under-learned, so that repetitive generations are more … token
loss dynamic reweighting (TLDR) that applies differentiable weights to individual token losses
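
Both versions of the paper describe the same mechanism: per-token losses are reweighted by a differentiable function of the model's current confidence, so hard (under-learned) tokens receive larger updates. Below is a minimal sketch of this idea using the focal-loss-style weight (1 - p)^gamma that the paper evaluates as a baseline; TLDR's own weighting function is a refinement of this, so treat the exact form here as illustrative rather than the paper's method.

```python
import torch
import torch.nn.functional as F

def reweighted_token_loss(logits, targets, gamma=2.0):
    """Per-token cross-entropy reweighted by a differentiable function of
    the gold token's probability, so under-learned (hard) tokens contribute
    more. The (1 - p)^gamma weight is the focal-loss form used as a baseline
    in the paper; TLDR's exact weighting differs (see the paper)."""
    log_probs = F.log_softmax(logits, dim=-1)                   # (batch, seq, vocab)
    tgt_log_p = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    p = tgt_log_p.exp()                                         # prob of the gold token
    weight = (1.0 - p) ** gamma                                 # hard tokens -> larger weight
    return -(weight * tgt_log_p).mean()

# Toy usage with random logits.
logits = torch.randn(2, 5, 100, requires_grad=True)
targets = torch.randint(0, 100, (2, 5))
reweighted_token_loss(logits, targets).backward()
```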

Implicit unlikelihood training: Improving neural text generation with reinforcement learning

E Lagutin, D Gavrilov, P Kalaidin - arXiv preprint arXiv:2101.04229, 2021 - arxiv.org
… are still under-learned after training, making their repetition more likely to happen. This issue
is addressed by token loss dynamic reweighting (TLDR), which applies … utterance generation …
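
For context, the token-level unlikelihood objective (Welleck et al., 2020) that this paper builds on adds a term that pushes down the probability of "negative candidates", conventionally the tokens already seen earlier in the sequence. A minimal PyTorch sketch, with that standard candidate set and the mixing weight alpha as assumed choices:

```python
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, targets, alpha=1.0):
    """MLE term plus token-level unlikelihood: penalize -log(1 - p) for
    tokens that already appeared earlier in the sequence (the usual
    negative candidates for reducing repetition)."""
    log_probs = F.log_softmax(logits, dim=-1)          # (batch, seq, vocab)
    mle = F.nll_loss(log_probs.transpose(1, 2), targets)

    probs = log_probs.exp()
    batch, seq, vocab = probs.shape
    # Candidate mask: for step t, mark every token seen at steps < t.
    cand = torch.zeros_like(probs, dtype=torch.bool)
    for t in range(1, seq):
        cand[:, t].scatter_(1, targets[:, :t], True)
    # Never penalize the gold token itself.
    cand.scatter_(2, targets.unsqueeze(-1), False)

    ul = -torch.log1p(-probs.clamp(max=1 - 1e-6))      # -log(1 - p)
    ul = (ul * cand).sum() / cand.sum().clamp(min=1)
    return mle + alpha * ul

logits = torch.randn(2, 6, 50, requires_grad=True)
targets = torch.randint(0, 50, (2, 6))
unlikelihood_loss(logits, targets).backward()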

Multi-level adaptive contrastive learning for knowledge internalization in dialogue generation

C Yang, Z Lin, L Wang, C Tian, L Pang, J Li… - arXiv preprint arXiv …, 2023 - arxiv.org
… , we propose to dynamically select negative tokens based on … ’s utterance and the knowledge
during the generation stage. … TLDR: Token loss dynamic reweighting for reducing repetitive
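
The snippet only hints at how negatives are chosen. One common heuristic for dynamically selecting negative tokens, sketched below purely for illustration, is to take the k tokens the model currently ranks highest while excluding the gold token; the paper's multi-level, knowledge-aware selection is more elaborate than this.

```python
import torch

def select_negative_tokens(logits, targets, k=5):
    """Illustrative heuristic only: at each step, take the k tokens the
    model currently ranks highest, excluding the gold token, as negatives
    for a contrastive objective."""
    masked = logits.clone()
    masked.scatter_(-1, targets.unsqueeze(-1), float('-inf'))  # drop gold token
    return masked.topk(k, dim=-1).indices                      # (batch, seq, k)

logits = torch.randn(2, 4, 30)
targets = torch.randint(0, 30, (2, 4))
print(select_negative_tokens(logits, targets).shape)  # torch.Size([2, 4, 5])
```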

Straight to the gradient: Learning to use novel tokens for neural text generation

X Lin, S Han, S Joty - International Conference on Machine …, 2021 - proceedings.mlr.press
… models trained with the MLE objective tend to generate repetitive tokens or phrases, we … Our
main idea is to maintain a dynamic list of novel tokens at each decoding step … In summary, …
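
The snippet describes the core of the method: a dynamic set of "novel" tokens (those not yet generated in the prefix) whose probabilities are rescaled before the loss is taken, which reshapes the MLE gradient toward using novel tokens. The following is a rough reconstruction under that reading, with gamma as an assumed hyperparameter; the paper specifies the exact scaling and its gradient analysis.

```python
import torch

def scaled_novel_token_nll(logits, targets, gamma=0.2):
    """Sketch of the dynamic novel-token idea: tokens not yet seen in the
    prefix are 'novel'; their probabilities are scaled by gamma and the
    distribution renormalized before taking the NLL. Illustrative
    reconstruction, not the official implementation."""
    probs = torch.softmax(logits, dim=-1)              # (batch, seq, vocab)
    batch, seq, vocab = probs.shape
    novel = torch.ones_like(probs, dtype=torch.bool)   # novel until generated
    for t in range(1, seq):
        novel[:, t] = novel[:, t - 1].clone()
        novel[:, t].scatter_(1, targets[:, t - 1:t], False)
    scaled = torch.where(novel, gamma * probs, probs)
    scaled = scaled / scaled.sum(dim=-1, keepdim=True)
    tgt_p = scaled.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return -tgt_p.clamp(min=1e-9).log().mean()

logits = torch.randn(2, 6, 40, requires_grad=True)
targets = torch.randint(0, 40, (2, 6))
scaled_novel_token_nll(logits, targets).backward()
```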

Diverse keyphrase generation with neural unlikelihood training

H Bahuleyan, LE Asri - arXiv preprint arXiv:2010.07665, 2020 - arxiv.org
… at (1) the target token level to discourage the generation of repeating tokens; (2) the copy …
TLDR: token loss dynamic reweighting for reducing repetitive utterance generation. CoRR, …

A simple contrastive learning objective for alleviating neural text degeneration

S Jiang, R Zhang, S Vakulenko, M de Rijke - arXiv preprint arXiv …, 2022 - arxiv.org
… of repetitive tokens by contrasting them with positive tokens. … In summary, one can see
that the repetition problem can be better … TLDR: token loss dynamic reweighting for reducing
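
A sketch of the contrastive idea the snippet describes: treat the gold token as the positive, previously emitted tokens as the repetitive negatives, and apply a softmax contrast between their logits at each step. This is an illustrative reconstruction of the general pattern, not the paper's exact objective.

```python
import torch

def contrastive_token_loss(logits, targets):
    """At each step, contrast the gold token's logit against the logits of
    tokens already emitted in the prefix (the repetitive negatives)."""
    batch, seq, vocab = logits.shape
    pos = logits.gather(-1, targets.unsqueeze(-1)).squeeze(-1)   # gold logits
    loss, count = 0.0, 0
    for t in range(1, seq):
        neg = logits[:, t].gather(1, targets[:, :t])             # (batch, t)
        z = torch.cat([pos[:, t:t + 1], neg], dim=1)
        # -log softmax of the positive against positive + negatives.
        loss = loss - torch.log_softmax(z, dim=1)[:, 0].mean()
        count += 1
    return loss / max(count, 1)

logits = torch.randn(2, 5, 30, requires_grad=True)
targets = torch.randint(0, 30, (2, 5))
contrastive_token_loss(logits, targets).backward()
```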

Mirostat: A neural text decoding algorithm that directly controls perplexity

S Basu, GS Ramachandran, NS Keskar… - arXiv preprint arXiv …, 2020 - arxiv.org
… 4a, we observe that repetition penalty tends to reduce percent repetition for fixed cross-entropy …
TLDR: Token loss dynamic reweighting for reducing repetitive utterance generation. arXiv:…

Mirostat: A perplexity-controlled neural text decoding algorithm

S Basu, GS Ramachandran, NS Keskar… - arXiv preprint arXiv …, 2020 - ask.qcloudimg.com
Token loss dynamic reweighting (TLDR) hypothesizes that … a repetition penalty in the
decoding process to reduce the … for percentage of repeated tokens across different sampling …
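
Both Mirostat entries refer to the repetition penalty of Keskar et al. (2019) as a decoding-time baseline: logits of tokens already present in the generated context are down-weighted by a factor theta > 1. The sign handling below follows the widely used implementation (positive logits divided by theta, negative ones multiplied), e.g. as in Hugging Face transformers:

```python
import torch

def apply_repetition_penalty(logits, generated_ids, theta=1.2):
    """Penalize the logits of tokens that already appear in the generated
    context: divide positive logits by theta, multiply negative ones."""
    scores = logits.gather(0, generated_ids)
    scores = torch.where(scores > 0, scores / theta, scores * theta)
    return logits.scatter(0, generated_ids, scores)

# Toy next-token step over a vocabulary of 10 tokens.
logits = torch.randn(10)
generated = torch.tensor([3, 7, 3])          # tokens emitted so far
penalized = apply_repetition_penalty(logits, generated)
next_token = torch.softmax(penalized, dim=0).argmax()
```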

Feature-based detection of automated language models: tackling GPT-2, GPT-3 and Grover

L Fröhling, A Zubiaga - PeerJ Computer Science, 2021 - peerj.com
… k most likely tokens), they observe repetition feedback loops … of entries to 0, we dynamically
adapt to differing text-lengths. … We expect the filtering to decrease the share of texts without …
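
As one concrete example of the length-normalized repetition features such detectors rely on (the paper's exact feature set differs, so this is illustrative only): the share of token occurrences beyond each token's first appearance.

```python
from collections import Counter

def repeated_token_share(tokens):
    """Length-normalized repetition feature: the fraction of token
    occurrences beyond each token's first appearance."""
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    repeats = sum(c - 1 for c in counts.values())
    return repeats / len(tokens)

print(repeated_token_share("the cat sat on the mat the end".split()))  # 0.25
```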