MRAC@MM 2024: Melbourne, VIC, Australia
- Jianhua Tao, Shreya Ghosh, Zheng Lian, Zhixi Cai, Björn W. Schuller, Abhinav Dhall, Guoying Zhao, Dimitrios Kollias, Erik Cambria, Roland Goecke, Tom Gedeon: Proceedings of the 2nd International Workshop on Multimodal and Responsible Affective Computing, MRAC 2024, Melbourne, VIC, Australia, 28 October 2024 - 1 November 2024. ACM 2024, ISBN 979-8-4007-1203-6
- Shreya Ghosh, Zhixi Cai, Abhinav Dhall, Dimitrios Kollias, Roland Goecke, Tom Gedeon: MRAC Track 1: 2nd Workshop on Multimodal, Generative and Responsible Affective Computing. 1-6
- Julien Epps: Wearable Sensing for Longitudinal Automatic Task Analysis. 7
- Mohammed Bennamoun: Seeing in 3D: Assistive Robotics with Advanced Computer Vision. 8-9
- Wuyang Chen, Yanjie Sun, Kele Xu, Yong Dou: THE-FD: Task Hierarchical Emotion-aware for Fake Detection. 10-14
- Jia Qing Poh, John See, Neamat El Gayar, Lai-Kuan Wong: Are You Paying Attention? Multimodal Linear Attention Transformers for Affect Prediction in Video Conversations. 15-23
- Luka Dragar, Peter Rot, Peter Peer, Vitomir Struc, Borut Batagelj: W-TDL: Window-Based Temporal Deepfake Localization. 24-29
- Mengjiong Bai, Roland Goecke: Can Expression Sensitivity Improve Macro- and Micro-Expression Spotting in Long Videos? 30-38
- Zheng Lian, Bin Liu, Rui Liu, Kele Xu, Erik Cambria, Guoying Zhao, Björn W. Schuller, Jianhua Tao: MRAC'24 Track 2: 2nd International Workshop on Multimodal and Responsible Affective Computing. 39-40
- Zheng Lian, Haiyang Sun, Licai Sun, Zhuofan Wen, Siyuan Zhang, Shun Chen, Hao Gu, Jinming Zhao, Ziyang Ma, Xie Chen, Jiangyan Yi, Rui Liu, Kele Xu, Bin Liu, Erik Cambria, Guoying Zhao, Björn W. Schuller, Jianhua Tao: MER 2024: Semi-Supervised Learning, Noise Robustness, and Open-Vocabulary Multimodal Emotion Recognition. 41-48
- Anbin Qi, Zhongliang Liu, Xinyong Zhou, Jinba Xiao, Fengrun Zhang, Qi Gan, Ming Tao, Gaozheng Zhang, Lu Zhang: Multimodal Emotion Recognition with Vision-language Prompting and Modality Dropout. 49-53
- Mengying Ge, Mingyang Li, Dongkai Tang, Pengbo Li, Kuo Liu, Shuhao Deng, Songbai Pu, Long Liu, Yang Song, Tao Zhang: Early Joint Learning of Emotion Information Makes MultiModal Model Understand You Better. 54-61
- Fei Gao, Pujin Shi: Audio-Guided Fusion Techniques for Multimodal Emotion Analysis. 62-66
- Zhixian Zhao, Haifeng Chen, Xi Li, Dongmei Jiang, Lei Xie: Improving Multimodal Emotion Recognition by Leveraging Acoustic Adaptation and Visual Alignment. 67-71
- Qi Fan, Yutong Li, Yi Xin, Xinyu Cheng, Guanglai Gao, Miao Ma: Leveraging Contrastive Learning and Self-Training for Multimodal Emotion Recognition with Limited Labeled Samples. 72-77
- Zebang Cheng, Shuyuan Tu, Dawei Huang, Minghan Li, Xiaojiang Peng, Zhi-Qi Cheng, Alexander G. Hauptmann: SZTU-CMU at MER2024: Improving Emotion-LLaMA with Conv-Attention for Multimodal Emotion Recognition. 78-87
- Zhengqin Lai, Xiaopeng Hong, Yabin Wang: Multimodal Blockwise Transformer for Robust Sentiment Recognition. 88-92
- Yunrui Cai, Runchuan Ye, Jingran Xie, Yixuan Zhou, Yaoxun Xu, Zhiyong Wu: Robust Representation Learning for Multimodal Emotion Recognition with Contrastive Learning and Mixup. 93-97
- Zitong Yu: Facial Physiological and Emotional Analysis. 98
- Zixing Zhang, Zhongren Dong, Zhiqiang Gao, Shihao Gao, Donghao Wang, Ciqiang Chen, Yuhan Nie, Huan Zhao: Open Vocabulary Emotion Prediction Based on Large Multimodal Models. 99-103
- Yaoxun Xu, Yixuan Zhou, Yunrui Cai, Jingran Xie, Runchuan Ye, Zhiyong Wu: Multimodal Emotion Captioning Using Large Language Model with Prompt Engineering. 104-109
- Liyun Zhang, Zhaojie Luo, Shuqiong Wu, Yuta Nakashima: MicroEmo: Time-Sensitive Multimodal Emotion Recognition with Subtle Clue Dynamics in Video Dialogues. 110-115
- Qi Fan, Haolin Zuo, Rui Liu, Zheng Lian, Guanglai Gao: Learning Noise-Robust Joint Representation for Multimodal Emotion Recognition under Incomplete Data Scenarios. 116-124