Refining BERT embeddings for document hashing via mutual information maximization
Existing unsupervised document hashing methods are mostly built on generative models. Because of the difficulty of capturing long dependency structures, these methods rarely model the raw documents directly, but instead model features extracted from them (e.g., bag-of-words (BOW), TF-IDF). In this paper, we propose to learn hash codes from BERT embeddings, motivated by their tremendous success on downstream tasks. As a first attempt, we modify existing generative hashing models to accommodate the BERT embeddings. However, little improvement is observed over the codes learned from the old BOW or TF-IDF features. We attribute this to the reconstruction requirement in generative hashing, which forces the irrelevant information abundant in BERT embeddings to be compressed into the codes as well. To remedy this issue, we further propose a new unsupervised hashing paradigm based on the mutual information (MI) maximization principle. Specifically, the method first constructs appropriate global and local codes from the documents and then seeks to maximize their mutual information. Experimental results on three benchmark datasets demonstrate that the proposed method generates hash codes that outperform existing ones learned from BOW features by a substantial margin.
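The MI-maximization step described above can be sketched with an InfoNCE-style lower bound, a common MI estimator in representation learning. This is an illustrative assumption: the abstract does not specify the estimator, how the global and local codes are constructed, or the binarization scheme, so the function names and the sign-based binarization below are hypothetical.

```python
import numpy as np

def infonce_mi_lower_bound(global_codes, local_codes, temperature=0.1):
    """InfoNCE-style lower bound on the MI between paired representations.

    global_codes, local_codes: (N, D) arrays; row i of each is assumed to
    come from the same document (a positive pair), while all other rows in
    the batch serve as negatives.
    """
    # L2-normalize so dot products become cosine similarities
    g = global_codes / np.linalg.norm(global_codes, axis=1, keepdims=True)
    l = local_codes / np.linalg.norm(local_codes, axis=1, keepdims=True)
    sims = (g @ l.T) / temperature                # (N, N) similarity matrix
    # row-wise log-softmax; diagonal entries correspond to positive pairs
    logits = sims - sims.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # InfoNCE bound: E[log p(positive)] + log N
    return float(np.mean(np.diag(log_probs)) + np.log(len(g)))

def binarize(embeddings):
    """Hash codes via the sign of mean-centered embeddings -- a simple,
    common binarization; the paper's actual scheme may differ."""
    centered = embeddings - embeddings.mean(axis=0, keepdims=True)
    return (centered > 0).astype(np.int8)
```

In training, one would maximize this bound (i.e., minimize its negative) over the encoders producing the global and local codes; correctly paired codes yield a higher bound than mismatched ones, which is what drives the codes of related documents together.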
arxiv.org