In this paper, we push sub-networks with different expected scales to learn similar embeddings for the same sentence. Concretely, S-SimCSE proposes a simple sentence-wise dropout mask: a dropout rate is sampled independently for each sentence, so that the same sentence is encoded by sub-networks of different expected sizes (see the sketch below). Experimental results show that S-SimCSE outperforms the state-of-the-art SimCSE by more than $1\%$ on BERT$_{base}$.
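The sentence-wise idea can be illustrated with a dropout layer whose rate varies per example. This is a minimal sketch under my own assumptions (a uniform rate distribution, the `SentenceWiseDropout` module name), not the authors' implementation:

```python
import torch
import torch.nn as nn

class SentenceWiseDropout(nn.Module):
    """Dropout whose rate is sampled per sentence in the batch
    (hypothetical sketch, not the authors' code)."""

    def __init__(self, p_min: float = 0.0, p_max: float = 0.2):
        super().__init__()
        self.p_min, self.p_max = p_min, p_max

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x  # no dropout at inference time
        batch = x.size(0)
        # One dropout rate per sentence; each rate induces a
        # sub-network of a different expected size.
        rates = torch.empty(batch, device=x.device).uniform_(self.p_min, self.p_max)
        keep = (1.0 - rates).view(batch, *([1] * (x.dim() - 1)))
        mask = torch.bernoulli(keep.expand_as(x))
        return x * mask / keep  # inverted-dropout rescaling
```

Swapping such a layer for the fixed-rate dropout inside the encoder would let every sentence in a batch be encoded by a differently sized sub-network.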
By way of background, SimCSE feeds a pre-trained language model the same sentence twice with two independently sampled dropout masks. The two embeddings derived from these forward passes are then treated as a positive pair, with the embeddings of the other in-batch sentences serving as negatives.
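A minimal sketch of that training objective follows. The `encoder` call in the usage comment is a placeholder for a pre-trained language model with pooling, and the temperature value is illustrative; only the two-forward-passes-with-dropout structure and the in-batch InfoNCE loss come from the description above:

```python
import torch
import torch.nn.functional as F

def simcse_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """In-batch InfoNCE over two dropout-augmented views of the same sentences.

    z1, z2: (batch, dim) embeddings from two forward passes with
    independently sampled dropout masks; row i of z1 and row i of z2
    encode the same sentence.
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    sim = z1 @ z2.t() / temperature      # (batch, batch) cosine similarities
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)  # diagonal entries are the positive pairs

# Usage sketch (encoder kept in train mode so dropout stays active):
# z1 = encoder(batch)   # first dropout mask
# z2 = encoder(batch)   # second, independently sampled mask
# loss = simcse_loss(z1, z2)
```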
We evaluated the proposed S-SimCSE on several popular semantic textual similarity (STS) datasets. Across these benchmarks, the results show that S-SimCSE outperforms the SimCSE baseline.
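STS benchmarks are conventionally scored by the Spearman correlation between the cosine similarity of each embedding pair and the human-annotated score. A small sketch of that metric, assuming the embeddings are already computed:

```python
import torch
import torch.nn.functional as F
from scipy.stats import spearmanr

def sts_spearman(emb_a: torch.Tensor, emb_b: torch.Tensor, gold_scores) -> float:
    """Spearman correlation between predicted and gold STS similarities."""
    cos = F.cosine_similarity(emb_a, emb_b, dim=-1)  # one score per sentence pair
    return float(spearmanr(cos.cpu().numpy(), gold_scores)[0])
```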
Contrastive learning-based methods, such as unsup-SimCSE, have achieved state-of-the-art (SOTA) performance in learning unsupervised sentence embeddings.
For reference, the original paper presents SimCSE as a simple contrastive learning framework that greatly advances the state-of-the-art in sentence embeddings.
Related papers: Discrete Cosine Transform as Universal Sentence Encoder (ACL 2021); S-SimCSE: Sampled Sub-networks for Contrastive Learning of Sentence Embedding (arXiv 2021).