Domain generalization (DG), which aims to learn a model that generalizes to an unseen target domain, has recently attracted increasing research interest. A major line of work learns domain-invariant representations to avoid greedily capturing all the correlations present in the source domains, as empirical risk minimization tends to do. However, over-emphasizing domain invariance can produce overly compressed representations, causing confusion between different classes within the same domain. To address this limitation, we introduce a novel dynamic domain-weighted contrastive loss, which maximizes subdomain differences between different classes, especially those belonging to the same domain, while minimizing the average distance between the points of the convex hull of the aligned source domains. Building on this loss, we propose Multi-source domain-adversarial generalization via dynamic domain-weighted Contrastive transfer learning (MsCtrl), a novel domain-adversarial generalization framework that optimizes the distribution alignment of source and potential target subdomains in an adversarial manner under the "control" of the aforementioned contrastive loss. Extensive experiments on real-world datasets demonstrate significant advantages of MsCtrl over existing state-of-the-art methods.
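The abstract does not give the exact formulation of the dynamic domain-weighted contrastive loss. As a rough illustration only, the following is a minimal sketch of a supervised contrastive loss in which negatives drawn from the anchor's own domain are upweighted, so that classes within one domain are pushed apart more strongly. All function names, the weighting scheme, and the hyperparameters are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def domain_weighted_contrastive_loss(feats, labels, domains,
                                     tau=0.5, same_domain_weight=2.0):
    """Toy domain-weighted supervised contrastive loss (illustrative only).

    feats:   (n, d) feature matrix
    labels:  (n,) class labels
    domains: (n,) domain indices
    Negatives sharing the anchor's domain get a larger weight in the
    denominator, penalizing same-domain class confusion more heavily.
    """
    # L2-normalize features and compute temperature-scaled similarities.
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / tau

    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        for j in range(n):
            if i == j or labels[i] != labels[j]:
                continue  # (i, j) must be a positive pair
            # Per-sample weights: upweight same-domain negatives of anchor i.
            w = np.ones(n)
            neg = labels != labels[i]
            w[neg & (domains == domains[i])] = same_domain_weight
            mask = np.arange(n) != i
            denom = np.sum(w[mask] * np.exp(sim[i][mask]))
            loss += -np.log(np.exp(sim[i, j]) / denom)
            count += 1
    return loss / max(count, 1)
```

With `same_domain_weight > 1`, each anchor's denominator grows whenever a same-domain negative exists, so the loss increases until the representation separates those classes more; setting it to 1 recovers a plain supervised contrastive loss.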