Application of multimodal self-supervised architectures for daily life affect recognition
- The recognition of affects (an umbrella term including but not limited to emotions, mood, and stress) in daily life is crucial for maintaining mental well-being and preventing long-term health issues. Wearable devices, such as smart bands, can collect physiological data including heart rate variability, electrodermal activity, skin temperature, and acceleration, facilitating daily life affect monitoring via machine learning models. However, accurately labeling this data for model evaluation is challenging in affective computing research, as individuals often provide subjective, inaccurate, or incomplete labels in their daily lives. This study introduces the adaptation of self-supervised learning architectures for multimodal daily life stress and emotion recognition tasks, focusing on self-representation and contrastive learning methods. By leveraging unlabeled multimodal physiological signals, we aim to alleviate the need for extensive labeled data and enhance model generalizability.
Our research demonstrates that self-supervised learning can effectively learn meaningful representations from physiological data without explicit labels, offering a promising approach for developing robust affect recognition systems that can operate in dynamic and uncontrolled environments. This work represents a significant step toward recognizing affects in the wild, with potential implications for personalized mental health support and timely interventions.
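To make the contrastive learning idea concrete, the sketch below implements an NT-Xent (normalized temperature-scaled cross-entropy) loss, the objective commonly used in contrastive self-supervised pretraining. This is an illustrative NumPy sketch, not the study's actual implementation: it assumes each physiological signal window is embedded twice under different augmentations (e.g. jitter, scaling), and the loss pulls the two views of the same window together while pushing apart views of different windows.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Hypothetical NT-Xent contrastive loss for illustration.

    z1, z2: (N, D) arrays of embeddings for two augmented views
    of the same N signal windows; row i of z1 and row i of z2
    form a positive pair, all other rows are negatives.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = z1.shape[0]
    # the positive for index i is i + n, and for i + n it is i
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # row-wise cross-entropy in a numerically stable form
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()
```

As a sanity check, embeddings of two near-identical views of the same windows should yield a lower loss than embeddings paired with unrelated random vectors, which is exactly the signal that lets the encoder learn from unlabeled data.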