16th AMTA 2024: Chicago, USA - Research Track
- Rebecca Knowles, Akiko Eriguchi, Shivali Goel: Proceedings of the 16th Conference of the Association for Machine Translation in the Americas, AMTA 2024 - Volume 1: Research Track, Chicago, USA, September 30 - October 2, 2024. Association for Machine Translation in the Americas 2024
- Eleftheria Briakou: AMTA Best Thesis Award Abstract: Detecting Fine-Grained Semantic Divergences to Improve Translation Understanding Across Languages. 1-3
- Séamus Lankford, Andy Way: Leveraging LLMs for MT in Crisis Scenarios: a blueprint for low-resource languages. 4-13
- Vipin Vijayan, Braeden Bowen, Scott Grigsby, Timothy Anderson, Jeremy Gwinnup: Adding multimodal capabilities to a text-only translation model. 14-28
- Braeden Bowen, Vipin Vijayan, Scott Grigsby, Timothy Anderson, Jeremy Gwinnup: Detecting concrete visual tokens for Multimodal Machine Translation. 29-38
- Richard Yue, John Ortega: Predicting Anchored Text from Translation Memories for Machine Translation Using Deep Learning Methods. 39-47
- Richard Yue, John Ortega, Kenneth Church: On Translating Technical Terminology: A Translation Workflow for Machine-Translated Acronyms. 48-54
- Ming Qian, Chuiqing Kong: Exploring the Advantages and Challenges of a Concept-Guided Approach in Large Language Model Aided Machine Translation: Integrating Generative AI And Human-like Cognition. 55-72
- Raj Dabre, Haiyue Song, Miriam Exel, Bianka Buschbeck, Johannes Eschbach-Dymanus, Hideki Tanaka: How Effective is Synthetic Data and Instruction Fine-tuning for Translation with Markup using LLMs? 73-87
- Javad PourMostafa Roshan Sharami, Dimitar Shterionov, Pieter Spronck: Guiding In-Context Learning of LLMs through Quality Estimation for Machine Translation. 88-101
- Rebecca Knowles, Samuel Larkin, Michel Simard, Marc Tessier, Gabriel Bernier-Colborne, Cyril Goutte, Chi-kiu Lo: Some Tradeoffs in Continual Learning for Parliamentary Neural Machine Translation Systems. 102-118
- Michel Simard: Position Paper: Should Machine Translation be Labelled as AI-Generated Content? 119-129
- Xuan Zhang, Kevin Duh: Best Practices of Successive Halving on Neural Machine Translation and Large Language Models. 130-139
- Ali Araabi, Vlad Niculae, Christof Monz: Entropy- and Distance-Regularized Attention Improves Low-Resource Neural Machine Translation. 140-153
- Ali Hatami, Mihael Arcan, Paul Buitelaar: Enhancing Translation Quality by Leveraging Semantic Diversity in Multimodal Machine Translation. 154-166
- Bismarck Bamfo Odoom, Nathaniel R. Robinson, Elijah Rippeth, Luis Tavarez-Arce, Kenton Murray, Matthew Wiesner, Paul McNamee, Philipp Koehn, Kevin Duh: Can Synthetic Speech Improve End-to-End Conversational Speech Translation? 167-177
- Natália Resende, James Hadley: The Translator's Canvas: Using LLMs to Enhance Poetry Translation. 178-189
- Ting Liu, Chi-kiu Lo, Elizabeth Marshman, Rebecca Knowles: Evaluation Briefs: Drawing on Translation Studies for Human Evaluation of MT. 190-208
- Yuto Kuroda, Atsushi Fujita, Tomoyuki Kajiwara: Word-level Translation Quality Estimation Based on Optimal Transport. 209-224
- Kenneth J. Sible, David Chiang: Improving Rare Word Translation With Dictionaries and Attention Masking. 225-235
- Inacio Vieira, Will Allred, Séamus Lankford, Sheila Castilho, Andy Way: How Much Data is Enough Data? Fine-Tuning Large Language Models for In-House Translation: Performance Evaluation Across Multiple Dataset Sizes. 236-249
- Marta Castello, Giada Pantana, Ilaria Torre: Examining Cognitive Biases in ChatGPT 3.5 and ChatGPT 4 through Human Evaluation and Linguistic Comparison. 250-260