SRCB at semeval-2022 task 5: Pretraining based image to text late sequential fusion system for multimodal misogynous meme identification
Proceedings of the 16th International Workshop on Semantic Evaluation …, 2022, aclanthology.org
Abstract
Online misogyny meme detection is an image/text multimodal classification task in which the complicated relation between image and text challenges an intelligent system's modality fusion learning capability. In this paper, we investigate the single-stream UNITER and dual-stream CLIP multimodal pretrained models on their capability to handle strongly and weakly correlated image/text pairs. The XGBoost classifier with image features extracted by the CLIP model achieves the highest performance and is robust to domain shift. Based on this, we propose the PBR system, an ensemble of Pretraining models, a Boosting method, and Rule-based adjustment, in which text information is fused into the system using our late sequential fusion scheme. Our system ranks 1st on both sub-task A and sub-task B of SemEval-2022 Task 5, Multimedia Automatic Misogyny Identification, with macro F1 scores of 0.834 and 0.731 on sub-task A and sub-task B, respectively.
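The abstract's core pipeline, CLIP image embeddings fed into an XGBoost classifier, can be sketched as follows. This is a minimal illustration, not the authors' released code: the CLIP checkpoint, the XGBoost hyperparameters, and the file names and labels are assumptions made for the example.

```python
# Minimal sketch (assumptions, not the authors' implementation):
# extract CLIP image embeddings for memes and train XGBoost on them.
import numpy as np
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
from xgboost import XGBClassifier

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def image_features(paths):
    """Return CLIP image embeddings, one row per meme image."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    feats = model.get_image_features(**inputs)  # shape (N, 512) for this checkpoint
    return feats.detach().numpy()

# Hypothetical training data: meme image paths and misogynous labels (1/0).
train_paths, train_labels = ["meme_001.png", "meme_002.png"], [1, 0]

X_train = image_features(train_paths)
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X_train, np.asarray(train_labels))

# Score new memes with the same feature pipeline.
print(clf.predict(image_features(["meme_new.png"])))
```

In the paper's full system, this image-only classifier is only one component; text information is folded in afterwards via the late sequential fusion scheme and the rule-based adjustment described in the abstract.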