RepL4NLP@ACL 2017 Vancouver, Canada
Phil Blunsom, Antoine Bordes, Kyunghyun Cho, Shay B. Cohen, Chris Dyer, Edward Grefenstette, Karl Moritz Hermann, Laura Rimell, Jason Weston, Scott Yih (eds.):
Proceedings of the 2nd Workshop on Representation Learning for NLP, Rep4NLP@ACL 2017, Vancouver, Canada, August 3, 2017. Association for Computational Linguistics 2017, ISBN 978-1-945626-62-3

- Pablo Gamallo: Sense Contextualization in a Dependency-Based Compositional Distributional Model. 1-9
- Franziska Horn: Context encoders as a simple but powerful extension of word2vec. 10-14
- Xingdi Yuan, Tong Wang, Çaglar Gülçehre, Alessandro Sordoni, Philip Bachman, Saizheng Zhang, Sandeep Subramanian, Adam Trischler: Machine Comprehension by Text-to-Text Neural Question Generation. 15-25
- Hai Wang, Takeshi Onishi, Kevin Gimpel, David A. McAllester: Emergent Predication Structure in Hidden State Vectors of Neural Readers. 26-36
- Joe Cheri Ross, Pushpak Bhattacharyya: Towards Harnessing Memory Networks for Coreference Resolution. 37-42
- Dongyun Liang, Weiran Xu, Yinge Zhao: Combining Word-Level and Character-Level Representations for Relation Classification of Informal Text. 43-47
- Xing Fan, Emilio Monti, Lambert Mathias, Markus Dreyer: Transfer Learning for Neural Semantic Parsing. 48-56
- Yelong Shen, Po-Sen Huang, Ming-Wei Chang, Jianfeng Gao: Modeling Large-Scale Structured Relationships with Shared Memory for Knowledge Base Completion. 57-68
- Rudolf Kadlec, Ondrej Bajgar, Jan Kleindienst: Knowledge Base Completion: Baselines Strike Back. 69-74
- Sebastian Brarda, Philip Yeres, Samuel R. Bowman: Sequential Attention: A Context-Aware Alignment Function for Machine Reading. 75-80
- Jan Rygl, Jan Pomikálek, Radim Rehurek, Michal Ruzicka, Vít Novotný, Petr Sojka: Semantic Vector Encoding and Similarity Search Using Fulltext Search Engines. 81-90
- Nanyun Peng, Mark Dredze: Multi-task Domain Adaptation for Sequence Tagging. 91-100
- Shyam Upadhyay, Kai-Wei Chang, Matt Taddy, Adam Kalai, James Y. Zou: Beyond Bilingual: Multi-sense Word Embeddings using Multilingual Context. 101-110
- Sheng Chen, Akshay Soni, Aasish Pappu, Yashar Mehdad: DocTag2Vec: An Embedding Based Multi-label Learning Approach for Document Tagging. 111-120
- Karol Grzegorczyk, Marcin Kurdziel: Binary Paragraph Vectors. 121-130
- Dennis Singh Moirangthem, Jegyung Son, Minho Lee: Representing Compositionality based on Multiple Timescales Gated Recurrent Neural Networks with Adaptive Temporal Hierarchy for Character-Level Language Models. 131-138
- Pranava Swaroop Madhyastha, Cristina España-Bonet: Learning Bilingual Projections of Embeddings for Vocabulary Expansion in Machine Translation. 139-145
- Teresa Botschen, Hatem Mousselly Sergieh, Iryna Gurevych: Prediction of Frame-to-Frame Relations in the FrameNet Hierarchy with Frame Embeddings. 146-156
- Holger Schwenk, Matthijs Douze: Learning Joint Multilingual Sentence Representations with Neural Machine Translation. 157-167
- Julius Kunze, Louis Kirsch, Ilia Kurenkov, Andreas Krug, Jens Johannsmeier, Sebastian Stober: Transfer Learning for Speech Recognition on a Budget. 168-177
- Shima Asaadi, Sebastian Rudolph: Gradual Learning of Matrix-Space Models of Language for Sentiment Analysis. 178-185
- Fréderic Godin, Joni Dambre, Wesley De Neve: Improving Language Modeling using Densely Connected Recurrent Neural Networks. 186-190
- Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman: NewsQA: A Machine Comprehension Dataset. 191-200
- Lawrence Phillips, Kyle Shaffer, Dustin Arendt, Nathan Oken Hodas, Svitlana Volkova: Intrinsic and Extrinsic Evaluation of Spatiotemporal Text Representations in Twitter Streams. 201-210
- Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, Virginia R. de Sa: Rethinking Skip-thought: A Neighborhood based Approach. 211-218
- Hannes Schulz, Jeremie Zumer, Layla El Asri, Shikhar Sharma: A Frame Tracking Model for Memory-Enhanced Dialogue Systems. 219-227
- Çaglar Gülçehre, Francis Dutil, Adam Trischler, Yoshua Bengio: Plan, Attend, Generate: Character-Level Neural Machine Translation with Planning. 228-234
- Paul Michel, Abhilasha Ravichander, Shruti Rijhwani: Does the Geometry of Word Embeddings Help Document Classification? A Case Study on Persistent Homology-Based Representations. 235-240
- Sandeep Subramanian, Sai Rajeswar, Francis Dutil, Chris Pal, Aaron C. Courville: Adversarial Generation of Natural Language. 241-251
- Yanyao Shen, Hyokun Yun, Zachary Chase Lipton, Yakov Kronrod, Animashree Anandkumar: Deep Active Learning for Named Entity Recognition. 252-256
- Alexander Rosenberg Johansen, Richard Socher: Learning when to skim and when to read. 257-264
- Lifu Tu, Kevin Gimpel, Karen Livescu: Learning to Embed Words in Context for Syntactic Tasks. 265-275