TLDR: Token Loss Dynamic Reweighting for Reducing Repetitive Utterance Generation
… is due to hard tokens, and empirically validate this by using focal loss (FL). • We propose a more effective
token loss dynamic reweighting (TLDR) method for reducing repetitive generations that is …
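The snippet above mentions validating the hard-token hypothesis with focal loss (FL). For reference, a minimal per-token sketch of the standard focal loss of Lin et al. (2017); the function name and setup here are illustrative, not the TLDR paper's exact implementation:

```python
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Standard focal loss applied per token:
    FL(p_t) = -(1 - p_t)^gamma * log(p_t). Easy (high-probability)
    tokens are down-weighted so training focuses on hard ones."""
    # logits: (batch, seq, vocab); targets: (batch, seq)
    log_p = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
    p_t = (-log_p).exp()  # model probability of each target token
    return ((1.0 - p_t) ** gamma * log_p).mean()
```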
TLDR: Token Loss Dynamic Reweighting for Reducing Repetitive Utterance Generation
… finished, hard tokens are still under-learned, so that repetitive generations are more … token
loss dynamic reweighting (TLDR) that applies differentiable weights to individual token losses…
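Per this abstract snippet, TLDR applies differentiable weights to individual token losses. The paper's exact weighting function is not quoted here, so the sigmoid form below is an assumption; only the overall shape, a weight that grows with token difficulty, follows from the snippet:

```python
import torch
import torch.nn.functional as F

def tldr_loss(logits, targets, alpha=1.0, pad_id=0):
    """Sketch of token loss dynamic reweighting (TLDR): each token's
    cross-entropy is scaled by a differentiable weight that grows with
    that token's own loss, emphasizing under-learned 'hard' tokens.
    The sigmoid weight is an assumed form, not necessarily the paper's."""
    token_loss = F.cross_entropy(
        logits.transpose(1, 2), targets, reduction="none"
    )  # (batch, seq)
    mask = (targets != pad_id).float()
    weights = torch.sigmoid(alpha * token_loss)  # differentiable in the loss
    return (weights * token_loss * mask).sum() / mask.sum()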
Implicit unlikelihood training: Improving neural text generation with reinforcement learning
E Lagutin, D Gavrilov, P Kalaidin - arXiv preprint arXiv:2101.04229, 2021 - arxiv.org
… are still under-learned after training, making their repetition more likely to happen. This issue
is addressed by token loss dynamic reweighting (TLDR), which applies … utterance generation …
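Implicit unlikelihood training builds on the token-level unlikelihood objective of Welleck et al. (2020); its reinforcement-learning component is not shown in the snippet, so only the base unlikelihood term is sketched here, with an illustrative negative-candidate mask:

```python
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, targets, neg_mask, alpha=1.0):
    """Token-level unlikelihood objective (Welleck et al. 2020):
    standard MLE plus a term pushing down the probability of negative
    candidates, e.g. tokens already present in the preceding context.
    neg_mask: (batch, seq, vocab) 0/1 tensor marking negatives per step."""
    log_probs = F.log_softmax(logits, dim=-1)
    mle = F.nll_loss(log_probs.transpose(1, 2), targets)
    probs = log_probs.exp()
    # Penalize each negative candidate c with -log(1 - p(c)).
    ul = -(torch.log((1.0 - probs).clamp(min=1e-6)) * neg_mask).sum(-1).mean()
    return mle + alpha * ul
```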
Multi-level adaptive contrastive learning for knowledge internalization in dialogue generation
… , we propose to dynamically select negative tokens based on … ’s utterance and the knowledge
during the generation stage. … TLDR: Token loss dynamic reweighting for reducing repetitive …
Straight to the gradient: Learning to use novel tokens for neural text generation
… models trained with MLE objective tend to generate repetitive tokens or phrases, we … Our
main idea is to maintain a dynamic list of novel tokens at each decoding step … In summary, …
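ScaleGrad's "dynamic list of novel tokens" can be sketched as follows. The rescaling shown, scaling novel-token probabilities by gamma < 1 and renormalizing before the NLL, reflects my reading of Lin et al. (2021) and should be checked against the paper:

```python
import torch

def scalegrad_nll(probs, targets, novel_mask, gamma=0.2, eps=1e-8):
    """Sketch of ScaleGrad: probabilities of 'novel' tokens (those not
    yet used in the prefix) are rescaled by gamma < 1 and renormalized
    before the NLL, which enlarges the gradient toward novel targets.
    probs: (batch, seq, vocab) softmax outputs;
    novel_mask: (batch, seq, vocab), 1 where a token is novel."""
    scale = gamma * novel_mask + (1.0 - novel_mask)   # gamma or 1 per token
    scaled = probs * scale
    scaled = scaled / (scaled.sum(-1, keepdim=True) + eps)
    p_target = scaled.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return -torch.log(p_target + eps).mean()
```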
Diverse keyphrase generation with neural unlikelihood training
H Bahuleyan, LE Asri - arXiv preprint arXiv:2010.07665, 2020 - arxiv.org
… at (1) the target token level to discourage the generation of repeating tokens; (2) the copy …
TLDR: token loss dynamic reweighting for reducing repetitive utterance generation. CoRR, …
A simple contrastive learning objective for alleviating neural text degeneration
… of repetitive tokens by contrasting them with positive tokens. … In summary, one can see
that the repetition problem can be better … TLDR: token loss dynamic reweighting for reducing …
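The snippet describes contrasting repetitive tokens with positive ones. A hedged sketch of such a token-level contrastive objective, with an illustrative mask of negatives (e.g. recently generated, repetition-prone tokens); the cited paper's exact formulation may differ:

```python
import torch
import torch.nn.functional as F

def contrastive_token_loss(logits, targets, neg_mask):
    """Token-level contrastive sketch: each target (positive) token is
    contrasted against negative tokens via a softmax over the set
    {positive} U {negatives}.
    neg_mask: (batch, seq, vocab), 1 for negatives at each step."""
    pos = logits.gather(-1, targets.unsqueeze(-1))           # (b, s, 1)
    neg = logits.masked_fill(neg_mask == 0, float("-inf"))   # negatives only
    both = torch.cat([pos, neg], dim=-1)                     # (b, s, v+1)
    # The positive always sits at index 0 of the concatenated logits.
    return F.cross_entropy(both.transpose(1, 2), torch.zeros_like(targets))
```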
Mirostat: A neural text decoding algorithm that directly controls perplexity
… 4a, we observe that repetition penalty tends to reduce percent repetition for fixed cross-…
TLDR: Token loss dynamic reweighting for reducing repetitive utterance generation. arXiv:…
Mirostat: A perplexity-controlled neural text decoding algorithm
… Token loss dynamic reweighting (TLDR) hypothesizes that … a repetition penalty in the
decoding process to reduce the … for percentage of repeated tokens across different sampling …
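Unlike the training-time losses above, Mirostat is a decoding-time method: it keeps observed per-token surprise near a target so as to control perplexity. A simplified step in the spirit of mirostat 2.0 follows; the feedback constant eta and this exact update rule are assumptions, and the Zipf-based top-k estimate of mirostat 1.0 is omitted:

```python
import torch

def mirostat_step(logits, mu, tau=3.0, eta=0.1):
    """Simplified, mirostat-2-style decoding step: drop tokens whose
    surprise (-log2 p) exceeds mu, sample from the remainder, then
    nudge mu toward the target surprise tau with a feedback update.
    logits: (vocab,); mu: running max-surprise threshold."""
    probs = torch.softmax(logits, dim=-1)
    surprise = -torch.log2(probs)
    allowed = surprise <= mu
    if not allowed.any():                 # always keep the top token
        allowed[probs.argmax()] = True
    filtered = torch.where(allowed, probs, torch.zeros_like(probs))
    filtered = filtered / filtered.sum()
    token = torch.multinomial(filtered, 1).item()
    mu = mu - eta * (surprise[token].item() - tau)
    return token, mu
```

If memory serves, the paper initializes mu near 2 * tau so that early steps are permissive before the feedback loop settles.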
Feature-based detection of automated language models: tackling GPT-2, GPT-3 and Grover
L Fröhling, A Zubiaga - PeerJ Computer Science, 2021 - peerj.com
… k most likely tokens), they observe repetition feedback loops … of entries to 0, we dynamically
adapt to differing text-lengths. … We expect the filtering to decrease the share of texts without …