- MRAC Track 1: 2nd Workshop on Multimodal, Generative and Responsible Affective Computing. Shreya Ghosh 0001, Zhixi Cai, Abhinav Dhall, Dimitrios Kollias, Roland Goecke, Tom Gedeon. 1-6 [doi]
- Wearable Sensing for Longitudinal Automatic Task Analysis. Julien Epps. 7 [doi]
- Seeing in 3D: Assistive Robotics with Advanced Computer Vision. Mohammed Bennamoun. 8-9 [doi]
- THE-FD: Task Hierarchical Emotion-aware for Fake Detection. Wuyang Chen, Yanjie Sun, Kele Xu, Yong Dou. 10-14 [doi]
- Are You Paying Attention? Multimodal Linear Attention Transformers for Affect Prediction in Video Conversations. Jia Qing Poh, John See, Neamat El Gayar, Lai-Kuan Wong. 15-23 [doi]
- W-TDL: Window-Based Temporal Deepfake Localization. Luka Dragar, Peter Rot, Peter Peer, Vitomir Struc, Borut Batagelj. 24-29 [doi]
- Can Expression Sensitivity Improve Macro- and Micro-Expression Spotting in Long Videos? Mengjiong Bai, Roland Goecke. 30-38 [doi]
- MRAC'24 Track 2: 2nd International Workshop on Multimodal and Responsible Affective Computing. Zheng Lian, Bin Liu 0041, Rui Liu 0008, Kele Xu, Erik Cambria, Guoying Zhao 0001, Björn W. Schuller, Jianhua Tao 0001. 39-40 [doi]
- MER 2024: Semi-Supervised Learning, Noise Robustness, and Open-Vocabulary Multimodal Emotion Recognition. Zheng Lian, Haiyang Sun, Licai Sun, Zhuofan Wen, Siyuan Zhang, Shun Chen, Hao Gu, Jinming Zhao, Ziyang Ma, Xie Chen 0001, Jiangyan Yi, Rui Liu 0008, Kele Xu, Bin Liu 0041, Erik Cambria, Guoying Zhao 0001, Björn W. Schuller, Jianhua Tao 0001. 41-48 [doi]
- Multimodal Emotion Recognition with Vision-language Prompting and Modality Dropout. Anbin Qi, Zhongliang Liu, Xinyong Zhou, Jinba Xiao, Fengrun Zhang, Qi Gan, Ming Tao, Gaozheng Zhang, Lu Zhang. 49-53 [doi]
- Early Joint Learning of Emotion Information Makes MultiModal Model Understand You Better. Mengying Ge, Mingyang Li, Dongkai Tang, Pengbo Li, Kuo Liu, Shuhao Deng, Songbai Pu, Long Liu, Yang Song, Tao Zhang. 54-61 [doi]
- Audio-Guided Fusion Techniques for Multimodal Emotion Analysis. Fei Gao, Pujin Shi. 62-66 [doi]
- Improving Multimodal Emotion Recognition by Leveraging Acoustic Adaptation and Visual Alignment. Zhixian Zhao, Haifeng Chen, Xi Li, Dongmei Jiang, Lei Xie. 67-71 [doi]
- Leveraging Contrastive Learning and Self-Training for Multimodal Emotion Recognition with Limited Labeled Samples. Qi Fan, Yutong Li, Yi Xin, Xinyu Cheng, Guanglai Gao, Miao Ma. 72-77 [doi]
- SZTU-CMU at MER2024: Improving Emotion-LLaMA with Conv-Attention for Multimodal Emotion Recognition. Zebang Cheng, Shuyuan Tu, Dawei Huang, Minghan Li, Xiaojiang Peng, Zhi-Qi Cheng, Alexander G. Hauptmann. 78-87 [doi]
- Multimodal Blockwise Transformer for Robust Sentiment Recognition. Zhengqin Lai, Xiaopeng Hong, Yabin Wang. 88-92 [doi]
- Robust Representation Learning for Multimodal Emotion Recognition with Contrastive Learning and Mixup. Yunrui Cai, Runchuan Ye, Jingran Xie, Yixuan Zhou 0002, Yaoxun Xu, Zhiyong Wu 0001. 93-97 [doi]
- Facial Physiological and Emotional Analysis. Zitong Yu. 98 [doi]
- Open Vocabulary Emotion Prediction Based on Large Multimodal Models. Zixing Zhang 0001, Zhongren Dong, Zhiqiang Gao, Shihao Gao, Donghao Wang, Ciqiang Chen, Yuhan Nie, Huan Zhao 0003. 99-103 [doi]
- Multimodal Emotion Captioning Using Large Language Model with Prompt Engineering. Yaoxun Xu, Yixuan Zhou 0002, Yunrui Cai, Jingran Xie, Runchuan Ye, Zhiyong Wu 0001. 104-109 [doi]
- MicroEmo: Time-Sensitive Multimodal Emotion Recognition with Subtle Clue Dynamics in Video Dialogues. Liyun Zhang, Zhaojie Luo, Shuqiong Wu, Yuta Nakashima. 110-115 [doi]
- Learning Noise-Robust Joint Representation for Multimodal Emotion Recognition under Incomplete Data Scenarios. Qi Fan, Haolin Zuo, Rui Liu 0008, Zheng Lian, Guanglai Gao. 116-124 [doi]